2025-07-12 19:13:36.074066 | Job console starting
2025-07-12 19:13:36.085838 | Updating git repos
2025-07-12 19:13:36.145577 | Cloning repos into workspace
2025-07-12 19:13:36.414399 | Restoring repo states
2025-07-12 19:13:36.448474 | Merging changes
2025-07-12 19:13:36.448495 | Checking out repos
2025-07-12 19:13:36.796159 | Preparing playbooks
2025-07-12 19:13:37.537639 | Running Ansible setup
2025-07-12 19:13:42.592799 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-12 19:13:43.568969 |
2025-07-12 19:13:43.569261 | PLAY [Base pre]
2025-07-12 19:13:43.594130 |
2025-07-12 19:13:43.594298 | TASK [Setup log path fact]
2025-07-12 19:13:43.625249 | orchestrator | ok
2025-07-12 19:13:43.649864 |
2025-07-12 19:13:43.650030 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-12 19:13:43.679884 | orchestrator | ok
2025-07-12 19:13:43.692285 |
2025-07-12 19:13:43.692419 | TASK [emit-job-header : Print job information]
2025-07-12 19:13:43.732457 | # Job Information
2025-07-12 19:13:43.732663 | Ansible Version: 2.16.14
2025-07-12 19:13:43.732699 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-07-12 19:13:43.732733 | Pipeline: post
2025-07-12 19:13:43.732756 | Executor: 521e9411259a
2025-07-12 19:13:43.732776 | Triggered by: https://github.com/osism/testbed/commit/5e211efd448c3d28ddba9683e94cf756230142f9
2025-07-12 19:13:43.732798 | Event ID: 4b1cdfae-5f54-11f0-8d55-f17017d61d19
2025-07-12 19:13:43.739764 |
2025-07-12 19:13:43.739892 | LOOP [emit-job-header : Print node information]
2025-07-12 19:13:43.856717 | orchestrator | ok:
2025-07-12 19:13:43.856933 | orchestrator | # Node Information
2025-07-12 19:13:43.857006 | orchestrator | Inventory Hostname: orchestrator
2025-07-12 19:13:43.857132 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-12 19:13:43.857165 | orchestrator | Username: zuul-testbed02
2025-07-12 19:13:43.857187 | orchestrator | Distro: Debian 12.11
2025-07-12 19:13:43.857211 | orchestrator | Provider: static-testbed
2025-07-12 19:13:43.857233 | orchestrator | Region:
2025-07-12 19:13:43.857255 | orchestrator | Label: testbed-orchestrator
2025-07-12 19:13:43.857275 | orchestrator | Product Name: OpenStack Nova
2025-07-12 19:13:43.857295 | orchestrator | Interface IP: 81.163.193.140
2025-07-12 19:13:43.907210 |
2025-07-12 19:13:43.907364 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-12 19:13:44.842247 | orchestrator -> localhost | changed
2025-07-12 19:13:44.850959 |
2025-07-12 19:13:44.851119 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-12 19:13:46.586334 | orchestrator -> localhost | changed
2025-07-12 19:13:46.602215 |
2025-07-12 19:13:46.602350 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-12 19:13:46.879741 | orchestrator -> localhost | ok
2025-07-12 19:13:46.888518 |
2025-07-12 19:13:46.888677 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-12 19:13:46.919251 | orchestrator | ok
2025-07-12 19:13:46.938045 | orchestrator | included: /var/lib/zuul/builds/b8321735e9ba42e18f9d24de95f698e9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-12 19:13:46.946563 |
2025-07-12 19:13:46.946690 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-12 19:13:49.842746 | orchestrator -> localhost | Generating public/private rsa key pair.
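The "Create Temp SSH key" task generates a per-build RSA keypair in the build's work directory. A minimal shell sketch of the equivalent command, under the assumption that the role calls `ssh-keygen` roughly like this (the build UUID and comment are taken from this log; the temporary directory stands in for the real work directory, and the exact role implementation may differ):

```shell
# Generate a 3072-bit RSA keypair with an empty passphrase, named after the build UUID
BUILD_UUID=b8321735e9ba42e18f9d24de95f698e9
WORK_DIR=$(mktemp -d)   # stand-in for /var/lib/zuul/builds/$BUILD_UUID/work
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey \
    -f "$WORK_DIR/${BUILD_UUID}_id_rsa"
# Print the fingerprint, as the task output below does
ssh-keygen -lf "$WORK_DIR/${BUILD_UUID}_id_rsa.pub"
```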
2025-07-12 19:13:49.843385 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/b8321735e9ba42e18f9d24de95f698e9/work/b8321735e9ba42e18f9d24de95f698e9_id_rsa
2025-07-12 19:13:49.843495 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/b8321735e9ba42e18f9d24de95f698e9/work/b8321735e9ba42e18f9d24de95f698e9_id_rsa.pub
2025-07-12 19:13:49.843567 | orchestrator -> localhost | The key fingerprint is:
2025-07-12 19:13:49.843631 | orchestrator -> localhost | SHA256:PlAJgYlVY8w45arCPM7+/6mw/+qT+GMMLAh+smrSPq0 zuul-build-sshkey
2025-07-12 19:13:49.843690 | orchestrator -> localhost | The key's randomart image is:
2025-07-12 19:13:49.843768 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-12 19:13:49.843825 | orchestrator -> localhost | | o.OB. |
2025-07-12 19:13:49.843880 | orchestrator -> localhost | | . =ooo . |
2025-07-12 19:13:49.843933 | orchestrator -> localhost | | .. o |
2025-07-12 19:13:49.843985 | orchestrator -> localhost | |. . . |
2025-07-12 19:13:49.844035 | orchestrator -> localhost | |+ . . . S |
2025-07-12 19:13:49.844117 | orchestrator -> localhost | |++ = o |
2025-07-12 19:13:49.844176 | orchestrator -> localhost | |.=*o+ . o |
2025-07-12 19:13:49.844229 | orchestrator -> localhost | |+++.+* .. |
2025-07-12 19:13:49.844284 | orchestrator -> localhost | |*=E=*BB+ |
2025-07-12 19:13:49.844336 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-12 19:13:49.844473 | orchestrator -> localhost | ok: Runtime: 0:00:02.355904
2025-07-12 19:13:49.857393 |
2025-07-12 19:13:49.857538 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-12 19:13:49.879228 | orchestrator | ok
2025-07-12 19:13:49.889931 | orchestrator | included: /var/lib/zuul/builds/b8321735e9ba42e18f9d24de95f698e9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-12 19:13:49.900316 |
2025-07-12 19:13:49.900430 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-12 19:13:49.924358 | orchestrator | skipping: Conditional result was False
2025-07-12 19:13:49.932695 |
2025-07-12 19:13:49.932807 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-12 19:13:50.563632 | orchestrator | changed
2025-07-12 19:13:50.572955 |
2025-07-12 19:13:50.573126 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-12 19:13:50.863740 | orchestrator | ok
2025-07-12 19:13:50.877363 |
2025-07-12 19:13:50.877526 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-12 19:13:51.282452 | orchestrator | ok
2025-07-12 19:13:51.290919 |
2025-07-12 19:13:51.291059 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-12 19:13:51.719412 | orchestrator | ok
2025-07-12 19:13:51.726684 |
2025-07-12 19:13:51.726814 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-12 19:13:51.752495 | orchestrator | skipping: Conditional result was False
2025-07-12 19:13:51.761029 |
2025-07-12 19:13:51.761251 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-12 19:13:52.264806 | orchestrator -> localhost | changed
2025-07-12 19:13:52.281223 |
2025-07-12 19:13:52.281354 | TASK [add-build-sshkey : Add back temp key]
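The "Enable access via build key on all nodes" step appends the build's public key to the remote user's `authorized_keys`, so that subsequent playbooks can SSH in with the per-build key. A hedged sketch of that idea; the paths and key material here are illustrative, and the real role uses Ansible modules rather than raw shell:

```shell
# Append a build public key to authorized_keys, creating ~/.ssh with safe modes first
HOME_DIR=$(mktemp -d)          # stand-in for the remote user's home directory
mkdir -p "$HOME_DIR/.ssh"
chmod 700 "$HOME_DIR/.ssh"
echo "ssh-rsa AAAAB3NzaC1yc2E...placeholder zuul-build-sshkey" >> "$HOME_DIR/.ssh/authorized_keys"
chmod 600 "$HOME_DIR/.ssh/authorized_keys"
```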
2025-07-12 19:13:52.631142 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/b8321735e9ba42e18f9d24de95f698e9/work/b8321735e9ba42e18f9d24de95f698e9_id_rsa (zuul-build-sshkey)
2025-07-12 19:13:52.631697 | orchestrator -> localhost | ok: Runtime: 0:00:00.018912
2025-07-12 19:13:52.647325 |
2025-07-12 19:13:52.647489 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-12 19:13:53.073659 | orchestrator | ok
2025-07-12 19:13:53.081382 |
2025-07-12 19:13:53.081518 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-12 19:13:53.115821 | orchestrator | skipping: Conditional result was False
2025-07-12 19:13:53.189428 |
2025-07-12 19:13:53.189774 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-12 19:13:53.609554 | orchestrator | ok
2025-07-12 19:13:53.630894 |
2025-07-12 19:13:53.631055 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-12 19:13:53.680450 | orchestrator | ok
2025-07-12 19:13:53.691922 |
2025-07-12 19:13:53.692183 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-12 19:13:53.994407 | orchestrator -> localhost | ok
2025-07-12 19:13:54.008367 |
2025-07-12 19:13:54.008517 | TASK [validate-host : Collect information about the host]
2025-07-12 19:13:55.244640 | orchestrator | ok
2025-07-12 19:13:55.258688 |
2025-07-12 19:13:55.258815 | TASK [validate-host : Sanitize hostname]
2025-07-12 19:13:55.343813 | orchestrator | ok
2025-07-12 19:13:55.350170 |
2025-07-12 19:13:55.350299 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-12 19:13:55.913142 | orchestrator -> localhost | changed
2025-07-12 19:13:55.920394 |
2025-07-12 19:13:55.920511 | TASK [validate-host : Collect information about zuul worker]
2025-07-12 19:13:56.396285 | orchestrator | ok
2025-07-12 19:13:56.402273 |
2025-07-12 19:13:56.402407 | TASK [validate-host : Write out all zuul information for each host]
2025-07-12 19:13:56.977002 | orchestrator -> localhost | changed
2025-07-12 19:13:56.988018 |
2025-07-12 19:13:56.988159 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-12 19:13:57.283987 | orchestrator | ok
2025-07-12 19:13:57.292108 |
2025-07-12 19:13:57.292236 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-12 19:14:36.985761 | orchestrator | changed:
2025-07-12 19:14:36.985988 | orchestrator | .d..t...... src/
2025-07-12 19:14:36.986024 | orchestrator | .d..t...... src/github.com/
2025-07-12 19:14:36.986049 | orchestrator | .d..t...... src/github.com/osism/
2025-07-12 19:14:36.986092 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-12 19:14:36.986114 | orchestrator | RedHat.yml
2025-07-12 19:14:36.997655 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-12 19:14:36.997672 | orchestrator | RedHat.yml
2025-07-12 19:14:36.997725 | orchestrator | = 1.53.0"...
2025-07-12 19:14:48.881543 | orchestrator | 19:14:48.881 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-07-12 19:14:48.908127 | orchestrator | 19:14:48.908 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-07-12 19:14:49.614038 | orchestrator | 19:14:49.613 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-12 19:14:50.709689 | orchestrator | 19:14:50.709 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 19:14:51.772734 | orchestrator | 19:14:51.772 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.0...
2025-07-12 19:14:52.604254 | orchestrator | 19:14:52.604 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.0 (signed, key ID 4F80527A391BEFD2)
2025-07-12 19:14:53.461100 | orchestrator | 19:14:53.460 STDOUT terraform: - Installing hashicorp/local v2.5.3...
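The provider discovery and installation above corresponds to a `required_providers` block that `tofu init` resolves and records in `.terraform.lock.hcl`. A sketch of a configuration consistent with the versions in this log; the file layout is hypothetical (the testbed repository's real Terraform files differ), and the `">= 1.53.0"` constraint is an assumption based on the truncated fragment in the log:

```shell
# Write a provider-requirements file matching the providers this job installed;
# `tofu init` (not run here) would download them and write .terraform.lock.hcl
DEMO_DIR=$(mktemp -d) && cd "$DEMO_DIR"
cat > versions.tf <<'EOF'
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"   # assumed constraint; log resolves to v3.3.0
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"    # log resolves to v2.5.3
    }
    null = {
      source = "hashicorp/null"  # latest; log resolves to v3.2.4
    }
  }
}
EOF
# tofu init   # would produce the "Installing ..." lines seen above
```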
2025-07-12 19:14:53.920477 | orchestrator | 19:14:53.919 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 19:14:53.920540 | orchestrator | 19:14:53.919 STDOUT terraform: Providers are signed by their developers.
2025-07-12 19:14:53.920551 | orchestrator | 19:14:53.919 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-12 19:14:53.920559 | orchestrator | 19:14:53.919 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-12 19:14:53.920566 | orchestrator | 19:14:53.919 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-12 19:14:53.920578 | orchestrator | 19:14:53.920 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-12 19:14:53.920589 | orchestrator | 19:14:53.920 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-12 19:14:53.920597 | orchestrator | 19:14:53.920 STDOUT terraform: you run "tofu init" in the future.
2025-07-12 19:14:54.656272 | orchestrator | 19:14:54.655 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-12 19:14:54.656318 | orchestrator | 19:14:54.655 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-12 19:14:54.656329 | orchestrator | 19:14:54.656 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-12 19:14:54.656334 | orchestrator | 19:14:54.656 STDOUT terraform: should now work.
2025-07-12 19:14:54.656338 | orchestrator | 19:14:54.656 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-12 19:14:54.656342 | orchestrator | 19:14:54.656 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-12 19:14:54.656347 | orchestrator | 19:14:54.656 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-12 19:14:54.752838 | orchestrator | 19:14:54.752 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-12 19:14:54.752925 | orchestrator | 19:14:54.752 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-12 19:14:54.929344 | orchestrator | 19:14:54.929 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-12 19:14:54.929427 | orchestrator | 19:14:54.929 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-12 19:14:54.929444 | orchestrator | 19:14:54.929 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-12 19:14:54.929461 | orchestrator | 19:14:54.929 STDOUT terraform: for this configuration.
2025-07-12 19:14:55.081518 | orchestrator | 19:14:55.080 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-07-12 19:14:55.081562 | orchestrator | 19:14:55.080 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-12 19:14:55.178259 | orchestrator | 19:14:55.178 STDOUT terraform: ci.auto.tfvars
2025-07-12 19:14:55.179654 | orchestrator | 19:14:55.179 STDOUT terraform: default_custom.tf
2025-07-12 19:14:55.309581 | orchestrator | 19:14:55.309 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
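Terragrunt warns repeatedly that `TERRAGRUNT_TFPATH` and the bare `workspace`/`fmt` subcommands are deprecated. A sketch of the replacements the warnings themselves suggest; the path is taken from this log, and the commented commands are not executed here:

```shell
# Old, deprecated form (what the job currently appears to set):
#   export TERRAGRUNT_TFPATH=/home/zuul-testbed02/terraform
#   terragrunt workspace new ci
# Replacements suggested by the deprecation warnings:
export TG_TF_PATH=/home/zuul-testbed02/terraform
# terragrunt run -- workspace new ci
# terragrunt run -- fmt
echo "$TG_TF_PATH"
```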
2025-07-12 19:14:56.162202 | orchestrator | 19:14:56.162 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-07-12 19:14:56.668042 | orchestrator | 19:14:56.667 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-07-12 19:14:56.908257 | orchestrator | 19:14:56.906 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-07-12 19:14:56.908299 | orchestrator | 19:14:56.906 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-07-12 19:14:56.908305 | orchestrator | 19:14:56.906 STDOUT terraform:  + create 2025-07-12 19:14:56.908310 | orchestrator | 19:14:56.906 STDOUT terraform:  <= read (data resources) 2025-07-12 19:14:56.908314 | orchestrator | 19:14:56.906 STDOUT terraform: OpenTofu will perform the following actions: 2025-07-12 19:14:56.908319 | orchestrator | 19:14:56.906 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-07-12 19:14:56.908323 | orchestrator | 19:14:56.906 STDOUT terraform:  # (config refers to values not yet known) 2025-07-12 19:14:56.908327 | orchestrator | 19:14:56.906 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-07-12 19:14:56.908331 | orchestrator | 19:14:56.906 STDOUT terraform:  + checksum = (known after apply) 2025-07-12 19:14:56.908335 | orchestrator | 19:14:56.906 STDOUT terraform:  + created_at = (known after apply) 2025-07-12 19:14:56.908339 | orchestrator | 19:14:56.906 STDOUT terraform:  + file = (known after apply) 2025-07-12 19:14:56.908343 | orchestrator | 19:14:56.906 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.908346 | orchestrator | 19:14:56.906 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:14:56.908360 | orchestrator | 19:14:56.906 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-07-12 19:14:56.908364 | orchestrator | 19:14:56.906 
STDOUT terraform:  + min_ram_mb = (known after apply) 2025-07-12 19:14:56.908368 | orchestrator | 19:14:56.906 STDOUT terraform:  + most_recent = true 2025-07-12 19:14:56.908372 | orchestrator | 19:14:56.906 STDOUT terraform:  + name = (known after apply) 2025-07-12 19:14:56.908376 | orchestrator | 19:14:56.906 STDOUT terraform:  + protected = (known after apply) 2025-07-12 19:14:56.908380 | orchestrator | 19:14:56.906 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.908384 | orchestrator | 19:14:56.906 STDOUT terraform:  + schema = (known after apply) 2025-07-12 19:14:56.908388 | orchestrator | 19:14:56.906 STDOUT terraform:  + size_bytes = (known after apply) 2025-07-12 19:14:56.908391 | orchestrator | 19:14:56.906 STDOUT terraform:  + tags = (known after apply) 2025-07-12 19:14:56.908395 | orchestrator | 19:14:56.906 STDOUT terraform:  + updated_at = (known after apply) 2025-07-12 19:14:56.908399 | orchestrator | 19:14:56.906 STDOUT terraform:  } 2025-07-12 19:14:56.908405 | orchestrator | 19:14:56.906 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-07-12 19:14:56.908409 | orchestrator | 19:14:56.906 STDOUT terraform:  # (config refers to values not yet known) 2025-07-12 19:14:56.908413 | orchestrator | 19:14:56.906 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-07-12 19:14:56.908417 | orchestrator | 19:14:56.906 STDOUT terraform:  + checksum = (known after apply) 2025-07-12 19:14:56.908420 | orchestrator | 19:14:56.906 STDOUT terraform:  + created_at = (known after apply) 2025-07-12 19:14:56.908424 | orchestrator | 19:14:56.906 STDOUT terraform:  + file = (known after apply) 2025-07-12 19:14:56.908428 | orchestrator | 19:14:56.906 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.908432 | orchestrator | 19:14:56.906 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:14:56.908436 | orchestrator | 19:14:56.906 STDOUT terraform:  + 
min_disk_gb = (known after apply) 2025-07-12 19:14:56.908439 | orchestrator | 19:14:56.906 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-07-12 19:14:56.908451 | orchestrator | 19:14:56.906 STDOUT terraform:  + most_recent = true 2025-07-12 19:14:56.908456 | orchestrator | 19:14:56.906 STDOUT terraform:  + name = (known after apply) 2025-07-12 19:14:56.908460 | orchestrator | 19:14:56.907 STDOUT terraform:  + protected = (known after apply) 2025-07-12 19:14:56.908463 | orchestrator | 19:14:56.907 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.908476 | orchestrator | 19:14:56.907 STDOUT terraform:  + schema = (known after apply) 2025-07-12 19:14:56.908480 | orchestrator | 19:14:56.907 STDOUT terraform:  + size_bytes = (known after apply) 2025-07-12 19:14:56.908484 | orchestrator | 19:14:56.907 STDOUT terraform:  + tags = (known after apply) 2025-07-12 19:14:56.908488 | orchestrator | 19:14:56.907 STDOUT terraform:  + updated_at = (known after apply) 2025-07-12 19:14:56.908491 | orchestrator | 19:14:56.907 STDOUT terraform:  } 2025-07-12 19:14:56.908495 | orchestrator | 19:14:56.907 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-07-12 19:14:56.908502 | orchestrator | 19:14:56.907 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-07-12 19:14:56.908506 | orchestrator | 19:14:56.907 STDOUT terraform:  + content = (known after apply) 2025-07-12 19:14:56.908510 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-12 19:14:56.908514 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-12 19:14:56.908518 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-12 19:14:56.908521 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-12 19:14:56.908525 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_sha256 = (known after 
apply) 2025-07-12 19:14:56.908529 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-12 19:14:56.908533 | orchestrator | 19:14:56.907 STDOUT terraform:  + directory_permission = "0777" 2025-07-12 19:14:56.908537 | orchestrator | 19:14:56.907 STDOUT terraform:  + file_permission = "0644" 2025-07-12 19:14:56.908541 | orchestrator | 19:14:56.907 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-07-12 19:14:56.908545 | orchestrator | 19:14:56.907 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.908548 | orchestrator | 19:14:56.907 STDOUT terraform:  } 2025-07-12 19:14:56.908552 | orchestrator | 19:14:56.907 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-07-12 19:14:56.908556 | orchestrator | 19:14:56.907 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-07-12 19:14:56.908560 | orchestrator | 19:14:56.907 STDOUT terraform:  + content = (known after apply) 2025-07-12 19:14:56.908564 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-12 19:14:56.908567 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-12 19:14:56.908571 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-12 19:14:56.908575 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-12 19:14:56.908578 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-12 19:14:56.908582 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-12 19:14:56.908586 | orchestrator | 19:14:56.907 STDOUT terraform:  + directory_permission = "0777" 2025-07-12 19:14:56.908590 | orchestrator | 19:14:56.907 STDOUT terraform:  + file_permission = "0644" 2025-07-12 19:14:56.908593 | orchestrator | 19:14:56.907 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-07-12 
19:14:56.908597 | orchestrator | 19:14:56.907 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.908601 | orchestrator | 19:14:56.907 STDOUT terraform:  } 2025-07-12 19:14:56.908607 | orchestrator | 19:14:56.907 STDOUT terraform:  # local_file.inventory will be created 2025-07-12 19:14:56.908611 | orchestrator | 19:14:56.907 STDOUT terraform:  + resource "local_file" "inventory" { 2025-07-12 19:14:56.908614 | orchestrator | 19:14:56.907 STDOUT terraform:  + content = (known after apply) 2025-07-12 19:14:56.908621 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-12 19:14:56.908625 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-12 19:14:56.908631 | orchestrator | 19:14:56.907 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-12 19:14:56.908635 | orchestrator | 19:14:56.908 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-12 19:14:56.908639 | orchestrator | 19:14:56.908 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-12 19:14:56.908643 | orchestrator | 19:14:56.908 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-12 19:14:56.908646 | orchestrator | 19:14:56.908 STDOUT terraform:  + directory_permission = "0777" 2025-07-12 19:14:56.908650 | orchestrator | 19:14:56.908 STDOUT terraform:  + file_permission = "0644" 2025-07-12 19:14:56.908654 | orchestrator | 19:14:56.908 STDOUT terraform:  + filename = "inventory.ci" 2025-07-12 19:14:56.908658 | orchestrator | 19:14:56.908 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.908661 | orchestrator | 19:14:56.908 STDOUT terraform:  } 2025-07-12 19:14:56.922250 | orchestrator | 19:14:56.922 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-07-12 19:14:56.922295 | orchestrator | 19:14:56.922 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-07-12 19:14:56.922301 | orchestrator | 19:14:56.922 
STDOUT terraform:  + content = (sensitive value) 2025-07-12 19:14:56.922310 | orchestrator | 19:14:56.922 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-07-12 19:14:56.922314 | orchestrator | 19:14:56.922 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-07-12 19:14:56.922318 | orchestrator | 19:14:56.922 STDOUT terraform:  + content_md5 = (known after apply) 2025-07-12 19:14:56.922333 | orchestrator | 19:14:56.922 STDOUT terraform:  + content_sha1 = (known after apply) 2025-07-12 19:14:56.922367 | orchestrator | 19:14:56.922 STDOUT terraform:  + content_sha256 = (known after apply) 2025-07-12 19:14:56.922410 | orchestrator | 19:14:56.922 STDOUT terraform:  + content_sha512 = (known after apply) 2025-07-12 19:14:56.922424 | orchestrator | 19:14:56.922 STDOUT terraform:  + directory_permission = "0700" 2025-07-12 19:14:56.922447 | orchestrator | 19:14:56.922 STDOUT terraform:  + file_permission = "0600" 2025-07-12 19:14:56.922480 | orchestrator | 19:14:56.922 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-07-12 19:14:56.922515 | orchestrator | 19:14:56.922 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.922529 | orchestrator | 19:14:56.922 STDOUT terraform:  } 2025-07-12 19:14:56.922572 | orchestrator | 19:14:56.922 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-07-12 19:14:56.922590 | orchestrator | 19:14:56.922 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-07-12 19:14:56.922609 | orchestrator | 19:14:56.922 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.922623 | orchestrator | 19:14:56.922 STDOUT terraform:  } 2025-07-12 19:14:56.922669 | orchestrator | 19:14:56.922 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-07-12 19:14:56.922725 | orchestrator | 19:14:56.922 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-07-12 19:14:56.922748 | 
orchestrator | 19:14:56.922 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:14:56.922772 | orchestrator | 19:14:56.922 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:14:56.922818 | orchestrator | 19:14:56.922 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.922840 | orchestrator | 19:14:56.922 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 19:14:56.922873 | orchestrator | 19:14:56.922 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:14:56.922916 | orchestrator | 19:14:56.922 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-07-12 19:14:56.922949 | orchestrator | 19:14:56.922 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.922980 | orchestrator | 19:14:56.922 STDOUT terraform:  + size = 80 2025-07-12 19:14:56.923000 | orchestrator | 19:14:56.922 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:14:56.923022 | orchestrator | 19:14:56.922 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:14:56.923035 | orchestrator | 19:14:56.923 STDOUT terraform:  } 2025-07-12 19:14:56.923078 | orchestrator | 19:14:56.923 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-07-12 19:14:56.923131 | orchestrator | 19:14:56.923 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-12 19:14:56.923153 | orchestrator | 19:14:56.923 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:14:56.923175 | orchestrator | 19:14:56.923 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:14:56.923210 | orchestrator | 19:14:56.923 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.923243 | orchestrator | 19:14:56.923 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 19:14:56.923286 | orchestrator | 19:14:56.923 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:14:56.923318 | orchestrator | 19:14:56.923 STDOUT 
terraform:  + name = "testbed-volume-0-node-base" 2025-07-12 19:14:56.923363 | orchestrator | 19:14:56.923 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.923369 | orchestrator | 19:14:56.923 STDOUT terraform:  + size = 80 2025-07-12 19:14:56.923390 | orchestrator | 19:14:56.923 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:14:56.923412 | orchestrator | 19:14:56.923 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:14:56.923418 | orchestrator | 19:14:56.923 STDOUT terraform:  } 2025-07-12 19:14:56.923465 | orchestrator | 19:14:56.923 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-07-12 19:14:56.923518 | orchestrator | 19:14:56.923 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-12 19:14:56.923541 | orchestrator | 19:14:56.923 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:14:56.923564 | orchestrator | 19:14:56.923 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:14:56.923598 | orchestrator | 19:14:56.923 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.923630 | orchestrator | 19:14:56.923 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 19:14:56.923673 | orchestrator | 19:14:56.923 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:14:56.923705 | orchestrator | 19:14:56.923 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-07-12 19:14:56.923744 | orchestrator | 19:14:56.923 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.923760 | orchestrator | 19:14:56.923 STDOUT terraform:  + size = 80 2025-07-12 19:14:56.923780 | orchestrator | 19:14:56.923 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:14:56.923802 | orchestrator | 19:14:56.923 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:14:56.923816 | orchestrator | 19:14:56.923 STDOUT terraform:  } 2025-07-12 19:14:56.923859 | orchestrator | 
  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + image_id             = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-base"
      + region               = (known after apply)
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[6] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-6-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[7] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-7-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
orchestrator | 19:14:56.938 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 19:14:56.942131 | orchestrator | 19:14:56.938 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 19:14:56.942137 | orchestrator | 19:14:56.938 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:56.942143 | orchestrator | 19:14:56.938 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:14:56.942159 | orchestrator | 19:14:56.938 STDOUT terraform:  + config_drive = true 2025-07-12 19:14:56.942170 | orchestrator | 19:14:56.938 STDOUT terraform:  + created = (known after apply) 2025-07-12 19:14:56.942177 | orchestrator | 19:14:56.938 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 19:14:56.942184 | orchestrator | 19:14:56.938 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 19:14:56.942190 | orchestrator | 19:14:56.939 STDOUT terraform:  + force_delete = false 2025-07-12 19:14:56.942197 | orchestrator | 19:14:56.939 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 19:14:56.942204 | orchestrator | 19:14:56.939 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.942211 | orchestrator | 19:14:56.939 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 19:14:56.942221 | orchestrator | 19:14:56.939 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 19:14:56.942228 | orchestrator | 19:14:56.939 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 19:14:56.942235 | orchestrator | 19:14:56.939 STDOUT terraform:  + name = "testbed-node-4" 2025-07-12 19:14:56.942242 | orchestrator | 19:14:56.939 STDOUT terraform:  + power_state = "active" 2025-07-12 19:14:56.942249 | orchestrator | 19:14:56.939 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.942255 | orchestrator | 19:14:56.939 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 19:14:56.942261 | orchestrator | 19:14:56.939 STDOUT terraform:  + stop_before_destroy = 
false 2025-07-12 19:14:56.942268 | orchestrator | 19:14:56.939 STDOUT terraform:  + updated = (known after apply) 2025-07-12 19:14:56.942275 | orchestrator | 19:14:56.939 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 19:14:56.942282 | orchestrator | 19:14:56.939 STDOUT terraform:  + block_device { 2025-07-12 19:14:56.942289 | orchestrator | 19:14:56.939 STDOUT terraform:  + boot_index = 0 2025-07-12 19:14:56.942296 | orchestrator | 19:14:56.939 STDOUT terraform:  + delete_on_termination = false 2025-07-12 19:14:56.942302 | orchestrator | 19:14:56.939 STDOUT terraform:  + destination_type = "volume" 2025-07-12 19:14:56.942316 | orchestrator | 19:14:56.939 STDOUT terraform:  + multiattach = false 2025-07-12 19:14:56.942323 | orchestrator | 19:14:56.939 STDOUT terraform:  + source_type = "volume" 2025-07-12 19:14:56.942330 | orchestrator | 19:14:56.940 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 19:14:56.942337 | orchestrator | 19:14:56.940 STDOUT terraform:  } 2025-07-12 19:14:56.942343 | orchestrator | 19:14:56.940 STDOUT terraform:  + network { 2025-07-12 19:14:56.942350 | orchestrator | 19:14:56.940 STDOUT terraform:  + access_network = false 2025-07-12 19:14:56.942357 | orchestrator | 19:14:56.940 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 19:14:56.942363 | orchestrator | 19:14:56.940 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 19:14:56.942370 | orchestrator | 19:14:56.940 STDOUT terraform:  + mac = (known after apply) 2025-07-12 19:14:56.942381 | orchestrator | 19:14:56.940 STDOUT terraform:  + name = (known after apply) 2025-07-12 19:14:56.942388 | orchestrator | 19:14:56.940 STDOUT terraform:  + port = (known after apply) 2025-07-12 19:14:56.942395 | orchestrator | 19:14:56.940 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 19:14:56.942402 | orchestrator | 19:14:56.940 STDOUT terraform:  } 2025-07-12 19:14:56.942408 | orchestrator | 19:14:56.940 
STDOUT terraform:  } 2025-07-12 19:14:56.942415 | orchestrator | 19:14:56.940 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-07-12 19:14:56.946115 | orchestrator | 19:14:56.940 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-12 19:14:56.946137 | orchestrator | 19:14:56.945 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 19:14:56.946145 | orchestrator | 19:14:56.945 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 19:14:56.946152 | orchestrator | 19:14:56.945 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 19:14:56.946159 | orchestrator | 19:14:56.945 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:56.946166 | orchestrator | 19:14:56.945 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:14:56.946172 | orchestrator | 19:14:56.945 STDOUT terraform:  + config_drive = true 2025-07-12 19:14:56.946178 | orchestrator | 19:14:56.945 STDOUT terraform:  + created = (known after apply) 2025-07-12 19:14:56.946185 | orchestrator | 19:14:56.945 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 19:14:56.946199 | orchestrator | 19:14:56.945 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 19:14:56.946206 | orchestrator | 19:14:56.945 STDOUT terraform:  + force_delete = false 2025-07-12 19:14:56.946213 | orchestrator | 19:14:56.945 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 19:14:56.946219 | orchestrator | 19:14:56.945 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.946226 | orchestrator | 19:14:56.945 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 19:14:56.946234 | orchestrator | 19:14:56.945 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 19:14:56.946240 | orchestrator | 19:14:56.945 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 19:14:56.946247 | orchestrator | 19:14:56.945 STDOUT terraform:  + name = 
"testbed-node-5" 2025-07-12 19:14:56.946254 | orchestrator | 19:14:56.945 STDOUT terraform:  + power_state = "active" 2025-07-12 19:14:56.946261 | orchestrator | 19:14:56.945 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.946268 | orchestrator | 19:14:56.945 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 19:14:56.946275 | orchestrator | 19:14:56.946 STDOUT terraform:  + stop_before_destroy = false 2025-07-12 19:14:56.946285 | orchestrator | 19:14:56.946 STDOUT terraform:  + updated = (known after apply) 2025-07-12 19:14:56.946293 | orchestrator | 19:14:56.946 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 19:14:56.946300 | orchestrator | 19:14:56.946 STDOUT terraform:  + block_device { 2025-07-12 19:14:56.946319 | orchestrator | 19:14:56.946 STDOUT terraform:  + boot_index = 0 2025-07-12 19:14:56.946626 | orchestrator | 19:14:56.946 STDOUT terraform:  + delete_on_termination = false 2025-07-12 19:14:56.946650 | orchestrator | 19:14:56.946 STDOUT terraform:  + destination_type = "volume" 2025-07-12 19:14:56.946658 | orchestrator | 19:14:56.946 STDOUT terraform:  + multiattach = false 2025-07-12 19:14:56.946664 | orchestrator | 19:14:56.946 STDOUT terraform:  + source_type = "volume" 2025-07-12 19:14:56.946671 | orchestrator | 19:14:56.946 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 19:14:56.946677 | orchestrator | 19:14:56.946 STDOUT terraform:  } 2025-07-12 19:14:56.946684 | orchestrator | 19:14:56.946 STDOUT terraform:  + network { 2025-07-12 19:14:56.946691 | orchestrator | 19:14:56.946 STDOUT terraform:  + access_network = false 2025-07-12 19:14:56.946701 | orchestrator | 19:14:56.946 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 19:14:56.946707 | orchestrator | 19:14:56.946 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 19:14:56.950045 | orchestrator | 19:14:56.946 STDOUT terraform:  + mac = (known after apply) 2025-07-12 
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
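The six identical `node_server` entries and nine `node_volume_attachment` entries above are the characteristic plan output of `count`-indexed resources. As a rough sketch only (this is not the testbed repository's actual code; the volume resource names and the attachment-to-instance mapping are assumptions, since neither appears in this plan excerpt), a configuration of this shape would produce such a plan:

```hcl
# Six identical boot-from-volume nodes; only the name differs per index.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("${path.module}/user_data.sh") # rendered as a SHA1 hash in the plan

  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    # hypothetical root-volume resource; not visible in this plan excerpt
    uuid = openstack_blockstorage_volume_v3.node_volume[count.index].id
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}

# Nine additional data volumes attached across the nodes
# (the index-to-node mapping below is an assumption).
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id
}
```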
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }
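The networking resources planned above (network, manager port with its floating IP, and the per-node ports that follow) map onto roughly the following configuration. This is a sketch, not the repository's actual code: the subnet resource name is assumed (no subnet appears in this plan excerpt), and the fixed-IP derivation via `cidrhost` is an illustrative guess based on the addresses visible in the plan (.5 for the manager, .10 and .11 for the first nodes):

```hcl
resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_port_v2" "manager_port_management" {
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    ip_address = "192.168.16.5"
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed subnet resource
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
}

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}

# Node ports repeat the same four allowed_address_pairs with only the
# fixed IP incrementing; a dynamic block keeps the definition compact.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    # node 0 -> 192.168.16.10, node 1 -> 192.168.16.11, ...
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index)
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id # assumed subnet resource
  }

  dynamic "allowed_address_pairs" {
    for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}
```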
be created 2025-07-12 19:14:56.970805 | orchestrator | 19:14:56.970 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 19:14:56.970899 | orchestrator | 19:14:56.970 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:14:56.971000 | orchestrator | 19:14:56.970 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:14:56.971058 | orchestrator | 19:14:56.970 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 19:14:56.971137 | orchestrator | 19:14:56.971 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:56.971207 | orchestrator | 19:14:56.971 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:14:56.971280 | orchestrator | 19:14:56.971 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 19:14:56.971360 | orchestrator | 19:14:56.971 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:14:56.971431 | orchestrator | 19:14:56.971 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:14:56.971503 | orchestrator | 19:14:56.971 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.971585 | orchestrator | 19:14:56.971 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:14:56.971651 | orchestrator | 19:14:56.971 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:14:56.971743 | orchestrator | 19:14:56.971 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:14:56.971848 | orchestrator | 19:14:56.971 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:14:56.971917 | orchestrator | 19:14:56.971 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.972003 | orchestrator | 19:14:56.971 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:14:56.972054 | orchestrator | 19:14:56.971 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.972091 | 
orchestrator | 19:14:56.972 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.972239 | orchestrator | 19:14:56.972 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 19:14:56.972255 | orchestrator | 19:14:56.972 STDOUT terraform:  } 2025-07-12 19:14:56.972277 | orchestrator | 19:14:56.972 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.973351 | orchestrator | 19:14:56.972 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:14:56.973379 | orchestrator | 19:14:56.972 STDOUT terraform:  } 2025-07-12 19:14:56.973391 | orchestrator | 19:14:56.972 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.973402 | orchestrator | 19:14:56.972 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 19:14:56.973413 | orchestrator | 19:14:56.972 STDOUT terraform:  } 2025-07-12 19:14:56.973424 | orchestrator | 19:14:56.972 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.973435 | orchestrator | 19:14:56.972 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:14:56.973446 | orchestrator | 19:14:56.972 STDOUT terraform:  } 2025-07-12 19:14:56.973457 | orchestrator | 19:14:56.972 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:14:56.973468 | orchestrator | 19:14:56.972 STDOUT terraform:  + fixed_ip { 2025-07-12 19:14:56.973479 | orchestrator | 19:14:56.972 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-07-12 19:14:56.973490 | orchestrator | 19:14:56.972 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:14:56.973501 | orchestrator | 19:14:56.972 STDOUT terraform:  } 2025-07-12 19:14:56.973512 | orchestrator | 19:14:56.972 STDOUT terraform:  } 2025-07-12 19:14:56.973523 | orchestrator | 19:14:56.972 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-07-12 19:14:56.973534 | orchestrator | 19:14:56.972 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 
19:14:56.973559 | orchestrator | 19:14:56.972 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:14:56.973570 | orchestrator | 19:14:56.973 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:14:56.973581 | orchestrator | 19:14:56.973 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 19:14:56.973600 | orchestrator | 19:14:56.973 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:56.973612 | orchestrator | 19:14:56.973 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:14:56.973623 | orchestrator | 19:14:56.973 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 19:14:56.973639 | orchestrator | 19:14:56.973 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:14:56.973650 | orchestrator | 19:14:56.973 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:14:56.973660 | orchestrator | 19:14:56.973 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.973671 | orchestrator | 19:14:56.973 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:14:56.978087 | orchestrator | 19:14:56.973 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:14:56.978135 | orchestrator | 19:14:56.973 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:14:56.978144 | orchestrator | 19:14:56.973 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:14:56.978152 | orchestrator | 19:14:56.973 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.978160 | orchestrator | 19:14:56.973 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:14:56.978168 | orchestrator | 19:14:56.974 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.978175 | orchestrator | 19:14:56.974 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.978183 | orchestrator | 19:14:56.974 STDOUT terraform:  + ip_address = 
"192.168.112.0/20" 2025-07-12 19:14:56.978192 | orchestrator | 19:14:56.974 STDOUT terraform:  } 2025-07-12 19:14:56.978200 | orchestrator | 19:14:56.974 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.978208 | orchestrator | 19:14:56.974 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:14:56.978216 | orchestrator | 19:14:56.974 STDOUT terraform:  } 2025-07-12 19:14:56.978224 | orchestrator | 19:14:56.974 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.978232 | orchestrator | 19:14:56.974 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 19:14:56.978240 | orchestrator | 19:14:56.974 STDOUT terraform:  } 2025-07-12 19:14:56.978247 | orchestrator | 19:14:56.974 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.978255 | orchestrator | 19:14:56.974 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:14:56.978263 | orchestrator | 19:14:56.974 STDOUT terraform:  } 2025-07-12 19:14:56.978271 | orchestrator | 19:14:56.974 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:14:56.978279 | orchestrator | 19:14:56.974 STDOUT terraform:  + fixed_ip { 2025-07-12 19:14:56.978305 | orchestrator | 19:14:56.974 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-07-12 19:14:56.978313 | orchestrator | 19:14:56.974 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:14:56.978329 | orchestrator | 19:14:56.974 STDOUT terraform:  } 2025-07-12 19:14:56.978337 | orchestrator | 19:14:56.974 STDOUT terraform:  } 2025-07-12 19:14:56.978345 | orchestrator | 19:14:56.974 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-07-12 19:14:56.978353 | orchestrator | 19:14:56.974 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 19:14:56.978361 | orchestrator | 19:14:56.974 STDOUT terraform:  + admin_state_up = 
(known after apply) 2025-07-12 19:14:56.978369 | orchestrator | 19:14:56.975 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:14:56.978377 | orchestrator | 19:14:56.975 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 19:14:56.978384 | orchestrator | 19:14:56.975 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:56.978392 | orchestrator | 19:14:56.975 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:14:56.978400 | orchestrator | 19:14:56.975 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 19:14:56.978408 | orchestrator | 19:14:56.975 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:14:56.978416 | orchestrator | 19:14:56.975 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:14:56.978423 | orchestrator | 19:14:56.975 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.978431 | orchestrator | 19:14:56.975 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:14:56.978439 | orchestrator | 19:14:56.975 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:14:56.978447 | orchestrator | 19:14:56.975 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:14:56.978465 | orchestrator | 19:14:56.975 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:14:56.978473 | orchestrator | 19:14:56.975 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.978481 | orchestrator | 19:14:56.975 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:14:56.978489 | orchestrator | 19:14:56.976 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.978497 | orchestrator | 19:14:56.976 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.978505 | orchestrator | 19:14:56.976 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 19:14:56.978512 | orchestrator | 19:14:56.976 STDOUT terraform:  
} 2025-07-12 19:14:56.978520 | orchestrator | 19:14:56.976 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.978528 | orchestrator | 19:14:56.976 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:14:56.978536 | orchestrator | 19:14:56.976 STDOUT terraform:  } 2025-07-12 19:14:56.978544 | orchestrator | 19:14:56.976 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.978557 | orchestrator | 19:14:56.976 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 19:14:56.978565 | orchestrator | 19:14:56.976 STDOUT terraform:  } 2025-07-12 19:14:56.978573 | orchestrator | 19:14:56.976 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.978581 | orchestrator | 19:14:56.976 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:14:56.978589 | orchestrator | 19:14:56.976 STDOUT terraform:  } 2025-07-12 19:14:56.978597 | orchestrator | 19:14:56.976 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:14:56.978605 | orchestrator | 19:14:56.976 STDOUT terraform:  + fixed_ip { 2025-07-12 19:14:56.978612 | orchestrator | 19:14:56.976 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-12 19:14:56.978621 | orchestrator | 19:14:56.976 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:14:56.978628 | orchestrator | 19:14:56.976 STDOUT terraform:  } 2025-07-12 19:14:56.978636 | orchestrator | 19:14:56.976 STDOUT terraform:  } 2025-07-12 19:14:56.978644 | orchestrator | 19:14:56.976 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-12 19:14:56.978653 | orchestrator | 19:14:56.976 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 19:14:56.978660 | orchestrator | 19:14:56.976 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:14:56.978668 | orchestrator | 19:14:56.976 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:14:56.978676 | orchestrator 
| 19:14:56.976 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 19:14:56.978684 | orchestrator | 19:14:56.977 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:56.978692 | orchestrator | 19:14:56.977 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:14:56.978700 | orchestrator | 19:14:56.977 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 19:14:56.978714 | orchestrator | 19:14:56.977 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:14:56.978722 | orchestrator | 19:14:56.977 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:14:56.978733 | orchestrator | 19:14:56.977 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.978741 | orchestrator | 19:14:56.977 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:14:56.978749 | orchestrator | 19:14:56.977 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:14:56.978756 | orchestrator | 19:14:56.977 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:14:56.978764 | orchestrator | 19:14:56.977 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:14:56.978772 | orchestrator | 19:14:56.977 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.978784 | orchestrator | 19:14:56.977 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:14:56.978792 | orchestrator | 19:14:56.977 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.978800 | orchestrator | 19:14:56.977 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.982306 | orchestrator | 19:14:56.977 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 19:14:56.982340 | orchestrator | 19:14:56.980 STDOUT terraform:  } 2025-07-12 19:14:56.982350 | orchestrator | 19:14:56.980 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.982358 | orchestrator | 19:14:56.980 STDOUT 
terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:14:56.982365 | orchestrator | 19:14:56.980 STDOUT terraform:  } 2025-07-12 19:14:56.982371 | orchestrator | 19:14:56.980 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.982378 | orchestrator | 19:14:56.980 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 19:14:56.982385 | orchestrator | 19:14:56.981 STDOUT terraform:  } 2025-07-12 19:14:56.982392 | orchestrator | 19:14:56.981 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.982398 | orchestrator | 19:14:56.981 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:14:56.982405 | orchestrator | 19:14:56.981 STDOUT terraform:  } 2025-07-12 19:14:56.982412 | orchestrator | 19:14:56.981 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:14:56.982419 | orchestrator | 19:14:56.981 STDOUT terraform:  + fixed_ip { 2025-07-12 19:14:56.982426 | orchestrator | 19:14:56.981 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-12 19:14:56.982433 | orchestrator | 19:14:56.981 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:14:56.982440 | orchestrator | 19:14:56.981 STDOUT terraform:  } 2025-07-12 19:14:56.982447 | orchestrator | 19:14:56.981 STDOUT terraform:  } 2025-07-12 19:14:56.982454 | orchestrator | 19:14:56.981 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-12 19:14:56.982461 | orchestrator | 19:14:56.981 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 19:14:56.982473 | orchestrator | 19:14:56.981 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:14:56.982480 | orchestrator | 19:14:56.981 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:14:56.982487 | orchestrator | 19:14:56.981 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 19:14:56.982494 | orchestrator | 19:14:56.981 STDOUT terraform:  + all_tags 
= (known after apply) 2025-07-12 19:14:56.982500 | orchestrator | 19:14:56.981 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:14:56.982507 | orchestrator | 19:14:56.981 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 19:14:56.982514 | orchestrator | 19:14:56.981 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:14:56.982520 | orchestrator | 19:14:56.981 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:14:56.982527 | orchestrator | 19:14:56.981 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.982534 | orchestrator | 19:14:56.981 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:14:56.982548 | orchestrator | 19:14:56.982 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:14:56.982555 | orchestrator | 19:14:56.982 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:14:56.982571 | orchestrator | 19:14:56.982 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:14:56.982578 | orchestrator | 19:14:56.982 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.982590 | orchestrator | 19:14:56.982 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:14:56.982597 | orchestrator | 19:14:56.982 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.982604 | orchestrator | 19:14:56.982 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.982611 | orchestrator | 19:14:56.982 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 19:14:56.982617 | orchestrator | 19:14:56.982 STDOUT terraform:  } 2025-07-12 19:14:56.982624 | orchestrator | 19:14:56.982 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.982631 | orchestrator | 19:14:56.982 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:14:56.982638 | orchestrator | 19:14:56.982 STDOUT terraform:  } 2025-07-12 19:14:56.982644 | orchestrator | 
19:14:56.982 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.982653 | orchestrator | 19:14:56.982 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 19:14:56.982660 | orchestrator | 19:14:56.982 STDOUT terraform:  } 2025-07-12 19:14:56.982667 | orchestrator | 19:14:56.982 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.985216 | orchestrator | 19:14:56.982 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:14:56.985244 | orchestrator | 19:14:56.982 STDOUT terraform:  } 2025-07-12 19:14:56.985251 | orchestrator | 19:14:56.982 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:14:56.985258 | orchestrator | 19:14:56.982 STDOUT terraform:  + fixed_ip { 2025-07-12 19:14:56.985265 | orchestrator | 19:14:56.982 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-12 19:14:56.985271 | orchestrator | 19:14:56.982 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:14:56.985278 | orchestrator | 19:14:56.982 STDOUT terraform:  } 2025-07-12 19:14:56.985284 | orchestrator | 19:14:56.982 STDOUT terraform:  } 2025-07-12 19:14:56.985291 | orchestrator | 19:14:56.982 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-12 19:14:56.985297 | orchestrator | 19:14:56.982 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 19:14:56.985303 | orchestrator | 19:14:56.982 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:14:56.985309 | orchestrator | 19:14:56.983 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:14:56.985315 | orchestrator | 19:14:56.983 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 19:14:56.985325 | orchestrator | 19:14:56.983 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:56.985331 | orchestrator | 19:14:56.983 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:14:56.985337 | 
orchestrator | 19:14:56.983 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 19:14:56.985343 | orchestrator | 19:14:56.983 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:14:56.985361 | orchestrator | 19:14:56.983 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:14:56.987835 | orchestrator | 19:14:56.983 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.987863 | orchestrator | 19:14:56.985 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:14:56.987869 | orchestrator | 19:14:56.985 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:14:56.987875 | orchestrator | 19:14:56.985 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:14:56.987880 | orchestrator | 19:14:56.985 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:14:56.987893 | orchestrator | 19:14:56.985 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.987899 | orchestrator | 19:14:56.985 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:14:56.987905 | orchestrator | 19:14:56.985 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.987910 | orchestrator | 19:14:56.985 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.987916 | orchestrator | 19:14:56.985 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 19:14:56.987922 | orchestrator | 19:14:56.985 STDOUT terraform:  } 2025-07-12 19:14:56.987927 | orchestrator | 19:14:56.985 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.987933 | orchestrator | 19:14:56.985 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:14:56.987938 | orchestrator | 19:14:56.986 STDOUT terraform:  } 2025-07-12 19:14:56.987944 | orchestrator | 19:14:56.986 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.987949 | orchestrator | 19:14:56.986 STDOUT terraform:  + ip_address = "192.168.16.8/20" 
2025-07-12 19:14:56.987955 | orchestrator | 19:14:56.986 STDOUT terraform:  } 2025-07-12 19:14:56.987960 | orchestrator | 19:14:56.986 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:14:56.987965 | orchestrator | 19:14:56.986 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:14:56.987987 | orchestrator | 19:14:56.986 STDOUT terraform:  } 2025-07-12 19:14:56.987993 | orchestrator | 19:14:56.986 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:14:56.987998 | orchestrator | 19:14:56.986 STDOUT terraform:  + fixed_ip { 2025-07-12 19:14:56.988004 | orchestrator | 19:14:56.986 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-12 19:14:56.988010 | orchestrator | 19:14:56.986 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:14:56.988015 | orchestrator | 19:14:56.986 STDOUT terraform:  } 2025-07-12 19:14:56.988021 | orchestrator | 19:14:56.986 STDOUT terraform:  } 2025-07-12 19:14:56.988026 | orchestrator | 19:14:56.986 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-12 19:14:56.988032 | orchestrator | 19:14:56.986 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-12 19:14:56.988038 | orchestrator | 19:14:56.986 STDOUT terraform:  + force_destroy = false 2025-07-12 19:14:56.988043 | orchestrator | 19:14:56.986 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.988060 | orchestrator | 19:14:56.986 STDOUT terraform:  + port_id = (known after apply) 2025-07-12 19:14:56.988066 | orchestrator | 19:14:56.986 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.988071 | orchestrator | 19:14:56.986 STDOUT terraform:  + router_id = (known after apply) 2025-07-12 19:14:56.988077 | orchestrator | 19:14:56.986 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:14:56.988083 | orchestrator | 19:14:56.986 STDOUT terraform:  } 2025-07-12 19:14:56.988088 | orchestrator | 
19:14:56.986 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-07-12 19:14:56.988094 | orchestrator | 19:14:56.986 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-12 19:14:56.988099 | orchestrator | 19:14:56.986 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:14:56.988105 | orchestrator | 19:14:56.986 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:56.988119 | orchestrator | 19:14:56.986 STDOUT terraform:  + availability_zone_hints = [ 2025-07-12 19:14:56.988125 | orchestrator | 19:14:56.987 STDOUT terraform:  + "nova", 2025-07-12 19:14:56.988131 | orchestrator | 19:14:56.987 STDOUT terraform:  ] 2025-07-12 19:14:56.988136 | orchestrator | 19:14:56.987 STDOUT terraform:  + distributed = (known after apply) 2025-07-12 19:14:56.988142 | orchestrator | 19:14:56.987 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-12 19:14:56.988148 | orchestrator | 19:14:56.987 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-12 19:14:56.988157 | orchestrator | 19:14:56.987 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-12 19:14:56.988163 | orchestrator | 19:14:56.987 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.988168 | orchestrator | 19:14:56.987 STDOUT terraform:  + name = "testbed" 2025-07-12 19:14:56.988174 | orchestrator | 19:14:56.987 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.988179 | orchestrator | 19:14:56.987 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.988185 | orchestrator | 19:14:56.987 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-12 19:14:56.988190 | orchestrator | 19:14:56.987 STDOUT terraform:  } 2025-07-12 19:14:56.988196 | orchestrator | 19:14:56.987 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-12 19:14:56.988202 
| orchestrator | 19:14:56.987 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-12 19:14:56.988208 | orchestrator | 19:14:56.987 STDOUT terraform:  + description = "ssh" 2025-07-12 19:14:56.988213 | orchestrator | 19:14:56.987 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:14:56.988219 | orchestrator | 19:14:56.987 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:14:56.988224 | orchestrator | 19:14:56.987 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.988230 | orchestrator | 19:14:56.988 STDOUT terraform:  + port_range_max = 22 2025-07-12 19:14:56.988239 | orchestrator | 19:14:56.988 STDOUT terraform:  + port_range_min = 22 2025-07-12 19:14:56.988245 | orchestrator | 19:14:56.988 STDOUT terraform:  + protocol = "tcp" 2025-07-12 19:14:56.988252 | orchestrator | 19:14:56.988 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.988258 | orchestrator | 19:14:56.988 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:14:56.988769 | orchestrator | 19:14:56.988 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:14:56.988788 | orchestrator | 19:14:56.988 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:14:56.988793 | orchestrator | 19:14:56.988 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:14:56.988799 | orchestrator | 19:14:56.988 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.988805 | orchestrator | 19:14:56.988 STDOUT terraform:  } 2025-07-12 19:14:56.988810 | orchestrator | 19:14:56.988 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-12 19:14:56.988816 | orchestrator | 19:14:56.988 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-12 19:14:56.988821 | orchestrator | 19:14:56.988 STDOUT terraform:  + 
description = "wireguard" 2025-07-12 19:14:56.988827 | orchestrator | 19:14:56.988 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:14:56.988832 | orchestrator | 19:14:56.988 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:14:56.988841 | orchestrator | 19:14:56.988 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.988847 | orchestrator | 19:14:56.988 STDOUT terraform:  + port_range_max = 51820 2025-07-12 19:14:56.989090 | orchestrator | 19:14:56.988 STDOUT terraform:  + port_range_min = 51820 2025-07-12 19:14:56.989103 | orchestrator | 19:14:56.988 STDOUT terraform:  + protocol = "udp" 2025-07-12 19:14:56.989108 | orchestrator | 19:14:56.988 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.989114 | orchestrator | 19:14:56.988 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:14:56.989122 | orchestrator | 19:14:56.989 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:14:56.989262 | orchestrator | 19:14:56.989 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:14:56.989278 | orchestrator | 19:14:56.989 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:14:56.989286 | orchestrator | 19:14:56.989 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.989293 | orchestrator | 19:14:56.989 STDOUT terraform:  } 2025-07-12 19:14:56.989437 | orchestrator | 19:14:56.989 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-07-12 19:14:56.989602 | orchestrator | 19:14:56.989 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-12 19:14:56.989614 | orchestrator | 19:14:56.989 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:14:56.989620 | orchestrator | 19:14:56.989 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:14:56.989634 | orchestrator | 19:14:56.989 STDOUT terraform:  + id = (known 
after apply) 2025-07-12 19:14:56.989643 | orchestrator | 19:14:56.989 STDOUT terraform:  + protocol = "tcp" 2025-07-12 19:14:56.989703 | orchestrator | 19:14:56.989 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.989757 | orchestrator | 19:14:56.989 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:14:56.989819 | orchestrator | 19:14:56.989 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:14:56.989874 | orchestrator | 19:14:56.989 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-12 19:14:56.989937 | orchestrator | 19:14:56.989 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:14:56.990113 | orchestrator | 19:14:56.989 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.990123 | orchestrator | 19:14:56.990 STDOUT terraform:  } 2025-07-12 19:14:56.990213 | orchestrator | 19:14:56.990 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-12 19:14:56.990301 | orchestrator | 19:14:56.990 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-12 19:14:56.990358 | orchestrator | 19:14:56.990 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:14:56.990394 | orchestrator | 19:14:56.990 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:14:56.990450 | orchestrator | 19:14:56.990 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.990487 | orchestrator | 19:14:56.990 STDOUT terraform:  + protocol = "udp" 2025-07-12 19:14:56.990550 | orchestrator | 19:14:56.990 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.990604 | orchestrator | 19:14:56.990 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:14:56.990668 | orchestrator | 19:14:56.990 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:14:56.990714 | orchestrator | 
19:14:56.990 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-12 19:14:56.990768 | orchestrator | 19:14:56.990 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:14:56.998046 | orchestrator | 19:14:56.990 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.998069 | orchestrator | 19:14:56.995 STDOUT terraform:  } 2025-07-12 19:14:56.998076 | orchestrator | 19:14:56.995 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-12 19:14:56.998080 | orchestrator | 19:14:56.995 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-12 19:14:56.998085 | orchestrator | 19:14:56.995 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:14:56.998089 | orchestrator | 19:14:56.995 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:14:56.998094 | orchestrator | 19:14:56.995 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.998098 | orchestrator | 19:14:56.995 STDOUT terraform:  + protocol = "icmp" 2025-07-12 19:14:56.998113 | orchestrator | 19:14:56.995 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.998118 | orchestrator | 19:14:56.995 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:14:56.998123 | orchestrator | 19:14:56.995 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:14:56.998127 | orchestrator | 19:14:56.995 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:14:56.998131 | orchestrator | 19:14:56.996 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:14:56.998135 | orchestrator | 19:14:56.996 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.998139 | orchestrator | 19:14:56.996 STDOUT terraform:  } 2025-07-12 19:14:56.998144 | orchestrator | 19:14:56.996 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-07-12 19:14:56.998148 | orchestrator | 19:14:56.996 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-07-12 19:14:56.998152 | orchestrator | 19:14:56.996 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:14:56.998156 | orchestrator | 19:14:56.996 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:14:56.998161 | orchestrator | 19:14:56.996 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.998165 | orchestrator | 19:14:56.996 STDOUT terraform:  + protocol = "tcp" 2025-07-12 19:14:56.998169 | orchestrator | 19:14:56.996 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.998173 | orchestrator | 19:14:56.996 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:14:56.998177 | orchestrator | 19:14:56.996 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:14:56.998182 | orchestrator | 19:14:56.996 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:14:56.998186 | orchestrator | 19:14:56.996 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:14:56.998190 | orchestrator | 19:14:56.996 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.998194 | orchestrator | 19:14:56.997 STDOUT terraform:  } 2025-07-12 19:14:56.998198 | orchestrator | 19:14:56.997 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-07-12 19:14:56.998203 | orchestrator | 19:14:56.997 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-07-12 19:14:56.998207 | orchestrator | 19:14:56.997 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:14:56.998211 | orchestrator | 19:14:56.997 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:14:56.998215 | orchestrator | 19:14:56.997 STDOUT terraform:  + id = (known 
after apply) 2025-07-12 19:14:56.998219 | orchestrator | 19:14:56.997 STDOUT terraform:  + protocol = "udp" 2025-07-12 19:14:56.998224 | orchestrator | 19:14:56.997 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.998233 | orchestrator | 19:14:56.997 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:14:56.998240 | orchestrator | 19:14:56.997 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:14:56.998244 | orchestrator | 19:14:56.997 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:14:56.998248 | orchestrator | 19:14:56.997 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:14:56.998253 | orchestrator | 19:14:56.997 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:56.998257 | orchestrator | 19:14:56.997 STDOUT terraform:  } 2025-07-12 19:14:56.998261 | orchestrator | 19:14:56.997 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-07-12 19:14:56.998265 | orchestrator | 19:14:56.997 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-07-12 19:14:56.998269 | orchestrator | 19:14:56.997 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:14:56.998274 | orchestrator | 19:14:56.997 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:14:56.998278 | orchestrator | 19:14:56.997 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:56.998282 | orchestrator | 19:14:56.997 STDOUT terraform:  + protocol = "icmp" 2025-07-12 19:14:56.998286 | orchestrator | 19:14:56.997 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:56.998290 | orchestrator | 19:14:56.998 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:14:56.998296 | orchestrator | 19:14:56.998 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:14:57.000150 | orchestrator | 19:14:56.998 STDOUT 
terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:14:57.000167 | orchestrator | 19:14:56.998 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:14:57.000171 | orchestrator | 19:14:56.998 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:57.000176 | orchestrator | 19:14:56.998 STDOUT terraform:  } 2025-07-12 19:14:57.000180 | orchestrator | 19:14:56.998 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-07-12 19:14:57.000184 | orchestrator | 19:14:56.998 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-07-12 19:14:57.000189 | orchestrator | 19:14:56.998 STDOUT terraform:  + description = "vrrp" 2025-07-12 19:14:57.000193 | orchestrator | 19:14:56.998 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:14:57.000197 | orchestrator | 19:14:56.998 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:14:57.000229 | orchestrator | 19:14:56.998 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:57.000234 | orchestrator | 19:14:56.998 STDOUT terraform:  + protocol = "112" 2025-07-12 19:14:57.000238 | orchestrator | 19:14:56.999 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:57.000243 | orchestrator | 19:14:56.999 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:14:57.000247 | orchestrator | 19:14:56.999 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:14:57.000251 | orchestrator | 19:14:56.999 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:14:57.000261 | orchestrator | 19:14:56.999 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:14:57.000265 | orchestrator | 19:14:56.999 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:57.000269 | orchestrator | 19:14:56.999 STDOUT terraform:  } 2025-07-12 19:14:57.000274 | orchestrator | 19:14:56.999 STDOUT terraform:  # 
openstack_networking_secgroup_v2.security_group_management will be created 2025-07-12 19:14:57.000281 | orchestrator | 19:14:57.000 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-07-12 19:14:57.000286 | orchestrator | 19:14:57.000 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:57.000324 | orchestrator | 19:14:57.000 STDOUT terraform:  + description = "management security group" 2025-07-12 19:14:57.000401 | orchestrator | 19:14:57.000 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:57.000477 | orchestrator | 19:14:57.000 STDOUT terraform:  + name = "testbed-management" 2025-07-12 19:14:57.000552 | orchestrator | 19:14:57.000 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:57.000671 | orchestrator | 19:14:57.000 STDOUT terraform:  + stateful = (known after apply) 2025-07-12 19:14:57.000789 | orchestrator | 19:14:57.000 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:57.000848 | orchestrator | 19:14:57.000 STDOUT terraform:  } 2025-07-12 19:14:57.001012 | orchestrator | 19:14:57.000 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-07-12 19:14:57.001154 | orchestrator | 19:14:57.001 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-07-12 19:14:57.001236 | orchestrator | 19:14:57.001 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:57.001304 | orchestrator | 19:14:57.001 STDOUT terraform:  + description = "node security group" 2025-07-12 19:14:57.001380 | orchestrator | 19:14:57.001 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:57.001445 | orchestrator | 19:14:57.001 STDOUT terraform:  + name = "testbed-node" 2025-07-12 19:14:57.001524 | orchestrator | 19:14:57.001 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:57.001600 | orchestrator | 19:14:57.001 STDOUT terraform:  + stateful = (known after 
apply) 2025-07-12 19:14:57.001673 | orchestrator | 19:14:57.001 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:57.001705 | orchestrator | 19:14:57.001 STDOUT terraform:  } 2025-07-12 19:14:57.001827 | orchestrator | 19:14:57.001 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-07-12 19:14:57.001945 | orchestrator | 19:14:57.001 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-07-12 19:14:57.002213 | orchestrator | 19:14:57.001 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:14:57.002222 | orchestrator | 19:14:57.002 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-07-12 19:14:57.004123 | orchestrator | 19:14:57.002 STDOUT terraform:  + dns_nameservers = [ 2025-07-12 19:14:57.004136 | orchestrator | 19:14:57.002 STDOUT terraform:  + "8.8.8.8", 2025-07-12 19:14:57.004147 | orchestrator | 19:14:57.002 STDOUT terraform:  + "9.9.9.9", 2025-07-12 19:14:57.004152 | orchestrator | 19:14:57.002 STDOUT terraform:  ] 2025-07-12 19:14:57.004156 | orchestrator | 19:14:57.002 STDOUT terraform:  + enable_dhcp = true 2025-07-12 19:14:57.004160 | orchestrator | 19:14:57.002 STDOUT terraform:  + gateway_ip = (known after apply) 2025-07-12 19:14:57.004164 | orchestrator | 19:14:57.002 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:57.004168 | orchestrator | 19:14:57.002 STDOUT terraform:  + ip_version = 4 2025-07-12 19:14:57.004172 | orchestrator | 19:14:57.002 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-07-12 19:14:57.004176 | orchestrator | 19:14:57.002 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-07-12 19:14:57.004179 | orchestrator | 19:14:57.002 STDOUT terraform:  + name = "subnet-testbed-management" 2025-07-12 19:14:57.004183 | orchestrator | 19:14:57.002 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:14:57.004187 | orchestrator | 19:14:57.002 STDOUT terraform:  + no_gateway = 
false 2025-07-12 19:14:57.004191 | orchestrator | 19:14:57.002 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:14:57.004194 | orchestrator | 19:14:57.003 STDOUT terraform:  + service_types = (known after apply) 2025-07-12 19:14:57.004198 | orchestrator | 19:14:57.003 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:14:57.004202 | orchestrator | 19:14:57.003 STDOUT terraform:  + allocation_pool { 2025-07-12 19:14:57.004206 | orchestrator | 19:14:57.003 STDOUT terraform:  + end = "192.168.31.250" 2025-07-12 19:14:57.004209 | orchestrator | 19:14:57.003 STDOUT terraform:  + start = "192.168.31.200" 2025-07-12 19:14:57.004213 | orchestrator | 19:14:57.003 STDOUT terraform:  } 2025-07-12 19:14:57.004217 | orchestrator | 19:14:57.003 STDOUT terraform:  } 2025-07-12 19:14:57.004221 | orchestrator | 19:14:57.003 STDOUT terraform:  # terraform_data.image will be created 2025-07-12 19:14:57.004225 | orchestrator | 19:14:57.003 STDOUT terraform:  + resource "terraform_data" "image" { 2025-07-12 19:14:57.004229 | orchestrator | 19:14:57.003 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:57.004232 | orchestrator | 19:14:57.003 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-12 19:14:57.004236 | orchestrator | 19:14:57.003 STDOUT terraform:  + output = (known after apply) 2025-07-12 19:14:57.004240 | orchestrator | 19:14:57.003 STDOUT terraform:  } 2025-07-12 19:14:57.004246 | orchestrator | 19:14:57.003 STDOUT terraform:  # terraform_data.image_node will be created 2025-07-12 19:14:57.004250 | orchestrator | 19:14:57.003 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-07-12 19:14:57.004254 | orchestrator | 19:14:57.003 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:14:57.004257 | orchestrator | 19:14:57.003 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-07-12 19:14:57.004261 | orchestrator | 19:14:57.003 STDOUT terraform:  + output = (known after apply) 2025-07-12 19:14:57.004265 | 
orchestrator | 19:14:57.003 STDOUT terraform:  } 2025-07-12 19:14:57.004269 | orchestrator | 19:14:57.004 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-07-12 19:14:57.004275 | orchestrator | 19:14:57.004 STDOUT terraform: Changes to Outputs: 2025-07-12 19:14:57.004281 | orchestrator | 19:14:57.004 STDOUT terraform:  + manager_address = (sensitive value) 2025-07-12 19:14:57.004285 | orchestrator | 19:14:57.004 STDOUT terraform:  + private_key = (sensitive value) 2025-07-12 19:14:57.164275 | orchestrator | 19:14:57.164 STDOUT terraform: terraform_data.image: Creating... 2025-07-12 19:14:57.165122 | orchestrator | 19:14:57.165 STDOUT terraform: terraform_data.image_node: Creating... 2025-07-12 19:14:57.165874 | orchestrator | 19:14:57.165 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=5698e35b-b675-4fed-5dbf-a05c5af18f6b] 2025-07-12 19:14:57.167467 | orchestrator | 19:14:57.167 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=291ef66d-8969-7d10-b210-45c619ba7a49] 2025-07-12 19:14:57.183407 | orchestrator | 19:14:57.183 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-07-12 19:14:57.187769 | orchestrator | 19:14:57.187 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-07-12 19:14:57.189454 | orchestrator | 19:14:57.189 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-07-12 19:14:57.190412 | orchestrator | 19:14:57.190 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-07-12 19:14:57.191821 | orchestrator | 19:14:57.191 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-07-12 19:14:57.193124 | orchestrator | 19:14:57.193 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-07-12 19:14:57.194789 | orchestrator | 19:14:57.194 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 
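The plan entries above for `security_group_node_rule1`/`rule2`/`rule3` (TCP, UDP and ICMP ingress from `0.0.0.0/0`) all follow the standard `openstack_networking_secgroup_rule_v2` pattern. A minimal sketch of the kind of HCL that produces such a plan — the exact structure and variable layout in the testbed repository may differ, and only the attribute values shown in the log above are taken from the source:

```hcl
# Sketch only: reconstructs the rule pattern visible in the plan output;
# the actual testbed configuration may be structured differently.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# One rule per protocol, open ingress as shown in the plan.
# The VRRP rule uses the numeric protocol "112" the same way.
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```

Fields such as `id`, `region` and `tenant_id` are computed by the provider, which is why the plan prints them as `(known after apply)`.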
2025-07-12 19:14:57.196477 | orchestrator | 19:14:57.196 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-07-12 19:14:57.198516 | orchestrator | 19:14:57.198 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-07-12 19:14:57.204499 | orchestrator | 19:14:57.204 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-07-12 19:14:57.659687 | orchestrator | 19:14:57.658 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-07-12 19:14:57.661687 | orchestrator | 19:14:57.661 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2025-07-12 19:14:57.668108 | orchestrator | 19:14:57.667 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-07-12 19:14:57.671398 | orchestrator | 19:14:57.671 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-07-12 19:14:57.697858 | orchestrator | 19:14:57.697 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-07-12 19:14:57.706050 | orchestrator | 19:14:57.705 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-07-12 19:14:58.179556 | orchestrator | 19:14:58.179 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=57c22842-02ba-4c9f-8244-9e7cb273b9b2] 2025-07-12 19:14:58.190102 | orchestrator | 19:14:58.189 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-07-12 19:15:00.835875 | orchestrator | 19:15:00.835 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8] 2025-07-12 19:15:00.842900 | orchestrator | 19:15:00.842 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2025-07-12 19:15:00.842988 | orchestrator | 19:15:00.842 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=5dd700c9-bc5e-4428-837a-aadccc164418] 2025-07-12 19:15:00.847502 | orchestrator | 19:15:00.847 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=e599fd12-11ff-4888-9095-9cc0b7d1a350] 2025-07-12 19:15:00.848880 | orchestrator | 19:15:00.848 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-07-12 19:15:00.863032 | orchestrator | 19:15:00.862 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-07-12 19:15:00.873843 | orchestrator | 19:15:00.873 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=0a88cf92-9e41-408b-a9d0-3b2da488fdc7] 2025-07-12 19:15:00.876773 | orchestrator | 19:15:00.876 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=0c476a5e-2a4b-4838-9c87-337753775914] 2025-07-12 19:15:00.881197 | orchestrator | 19:15:00.881 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-07-12 19:15:00.882366 | orchestrator | 19:15:00.882 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-07-12 19:15:00.894893 | orchestrator | 19:15:00.894 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb] 2025-07-12 19:15:00.904027 | orchestrator | 19:15:00.903 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 
2025-07-12 19:15:00.930291 | orchestrator | 19:15:00.929 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=da649787-4cd2-466e-b254-be39940a6b94] 2025-07-12 19:15:00.949054 | orchestrator | 19:15:00.948 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=a9737b61-a1af-4e5f-b757-491f643427f9] 2025-07-12 19:15:00.949139 | orchestrator | 19:15:00.948 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-07-12 19:15:00.958886 | orchestrator | 19:15:00.958 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=2d0b8f3112890324c456db1538757f1afb4ee898] 2025-07-12 19:15:00.969764 | orchestrator | 19:15:00.969 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-07-12 19:15:00.973417 | orchestrator | 19:15:00.973 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-07-12 19:15:00.977624 | orchestrator | 19:15:00.977 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=9ff9f9e2d9f32546a3f97889a8f13908f2171e5c] 2025-07-12 19:15:01.011779 | orchestrator | 19:15:01.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=f459a603-bea4-4ea2-b1cd-cecdf48dbc28] 2025-07-12 19:15:01.527713 | orchestrator | 19:15:01.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=82ec485a-b082-4a6f-b189-87a9a1d03f41] 2025-07-12 19:15:01.890673 | orchestrator | 19:15:01.890 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=221471dd-054f-436a-9fe0-c3c9dbfc88e5] 2025-07-12 19:15:01.899095 | orchestrator | 19:15:01.898 STDOUT terraform: openstack_networking_router_v2.router: Creating... 
2025-07-12 19:15:04.214203 | orchestrator | 19:15:04.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=d95ed02d-de93-4ced-b5a0-253568193ec9] 2025-07-12 19:15:04.263082 | orchestrator | 19:15:04.262 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=607076d3-244d-457e-a6e9-84d454d62909] 2025-07-12 19:15:04.275108 | orchestrator | 19:15:04.274 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e] 2025-07-12 19:15:04.312764 | orchestrator | 19:15:04.312 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=8452b2df-bb2b-4d77-accc-09da11bea63b] 2025-07-12 19:15:04.324933 | orchestrator | 19:15:04.324 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=8365c504-c177-40d4-a7fd-588ef9dda518] 2025-07-12 19:15:04.331190 | orchestrator | 19:15:04.330 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=e4d3d755-d51e-4e4d-b58b-320c6a01a06b] 2025-07-12 19:15:04.960732 | orchestrator | 19:15:04.960 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=9f83d42f-a225-44d3-85bf-7b5662ea845f] 2025-07-12 19:15:04.967808 | orchestrator | 19:15:04.967 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-07-12 19:15:04.968295 | orchestrator | 19:15:04.968 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-07-12 19:15:04.969203 | orchestrator | 19:15:04.969 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 
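The `subnet_management` resource created above carries the CIDR, DNS servers and DHCP allocation pool printed in the plan. A sketch of the corresponding HCL, using only the values visible in the log (the `network_id` reference is an assumption based on the `net_management` resource name):

```hcl
# Sketch matching the subnet attributes shown in the plan output.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out addresses only from this pool, leaving the rest of
  # the /20 free for statically assigned ports.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```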
2025-07-12 19:15:05.213162 | orchestrator | 19:15:05.212 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=286c1079-683e-4842-a66b-9b84b5766050] 2025-07-12 19:15:05.213725 | orchestrator | 19:15:05.213 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=7c492293-982e-416c-b081-00daf946f8a9] 2025-07-12 19:15:05.226455 | orchestrator | 19:15:05.226 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-07-12 19:15:05.228476 | orchestrator | 19:15:05.228 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-07-12 19:15:05.231368 | orchestrator | 19:15:05.231 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-07-12 19:15:05.233899 | orchestrator | 19:15:05.233 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-07-12 19:15:05.234405 | orchestrator | 19:15:05.234 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-07-12 19:15:05.234822 | orchestrator | 19:15:05.234 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-07-12 19:15:05.240734 | orchestrator | 19:15:05.240 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-07-12 19:15:05.244481 | orchestrator | 19:15:05.244 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-07-12 19:15:05.248114 | orchestrator | 19:15:05.247 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 
2025-07-12 19:15:05.421723 | orchestrator | 19:15:05.421 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=91cea7ec-be40-41de-a5e7-566a65829697] 2025-07-12 19:15:05.429574 | orchestrator | 19:15:05.429 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-07-12 19:15:05.566337 | orchestrator | 19:15:05.565 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=22612a11-dae0-4e0a-8df9-a452f78cc261] 2025-07-12 19:15:05.580768 | orchestrator | 19:15:05.580 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-07-12 19:15:05.600268 | orchestrator | 19:15:05.599 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=ab9ed252-5d53-48fd-8b42-ed6248a46a27] 2025-07-12 19:15:05.621950 | orchestrator | 19:15:05.621 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-07-12 19:15:05.774288 | orchestrator | 19:15:05.773 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=dc1b071c-ce75-4dc8-a9d4-71689c54db81] 2025-07-12 19:15:05.789522 | orchestrator | 19:15:05.789 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-07-12 19:15:05.965413 | orchestrator | 19:15:05.965 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=97d4b21d-be2f-459a-a19f-f6cc4c6ca96a] 2025-07-12 19:15:05.980678 | orchestrator | 19:15:05.980 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 
2025-07-12 19:15:06.001758 | orchestrator | 19:15:06.001 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=312d43db-4783-48ee-acf9-fec85a0ced69] 2025-07-12 19:15:06.014732 | orchestrator | 19:15:06.014 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-07-12 19:15:06.026336 | orchestrator | 19:15:06.025 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=5c8dea61-0bff-420e-b1d3-12479ef33cdb] 2025-07-12 19:15:06.036847 | orchestrator | 19:15:06.036 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-07-12 19:15:06.145317 | orchestrator | 19:15:06.144 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=55f6b778-f9ff-43de-bfaf-4476a51faf18] 2025-07-12 19:15:06.172846 | orchestrator | 19:15:06.172 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=4a161f50-c87f-4ba4-ae55-74c9e42a73b8] 2025-07-12 19:15:06.271455 | orchestrator | 19:15:06.271 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=25ac216b-f317-4db4-b305-c519d2fa364d] 2025-07-12 19:15:06.283433 | orchestrator | 19:15:06.283 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=117c3f66-8966-45c6-86bf-122944f3e21b] 2025-07-12 19:15:06.335631 | orchestrator | 19:15:06.335 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=a622fed2-6550-451d-9eb0-f5098aeb8580] 2025-07-12 19:15:06.554500 | orchestrator | 19:15:06.553 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=f9552312-ef03-44f8-93ef-963add1124cf] 2025-07-12 19:15:06.604156 | orchestrator | 19:15:06.603 
STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=01776f6e-eb52-4d29-8f22-51e0331b8573] 2025-07-12 19:15:06.824645 | orchestrator | 19:15:06.824 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=d0e55e98-42f1-49b3-8c9e-a128e51c9642] 2025-07-12 19:15:07.329477 | orchestrator | 19:15:07.329 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=44311b46-b612-48dd-96bc-1da96dd3882d] 2025-07-12 19:15:09.136920 | orchestrator | 19:15:09.135 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=de2c509b-23d4-4ab1-a454-e0a269ac447a] 2025-07-12 19:15:09.159065 | orchestrator | 19:15:09.158 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-07-12 19:15:09.170355 | orchestrator | 19:15:09.170 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-07-12 19:15:09.175185 | orchestrator | 19:15:09.175 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-07-12 19:15:09.175293 | orchestrator | 19:15:09.175 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-07-12 19:15:09.176695 | orchestrator | 19:15:09.176 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-07-12 19:15:09.185722 | orchestrator | 19:15:09.185 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-07-12 19:15:09.187789 | orchestrator | 19:15:09.187 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 
2025-07-12 19:15:10.668386 | orchestrator | 19:15:10.667 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=577eabb3-f4e9-432b-b0bb-74b609e4d50b] 2025-07-12 19:15:10.676702 | orchestrator | 19:15:10.676 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-07-12 19:15:10.682788 | orchestrator | 19:15:10.682 STDOUT terraform: local_file.inventory: Creating... 2025-07-12 19:15:10.685176 | orchestrator | 19:15:10.685 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-07-12 19:15:10.688668 | orchestrator | 19:15:10.688 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=177ba9d57ab62075c6ad3b968129aaab9894543c] 2025-07-12 19:15:10.691946 | orchestrator | 19:15:10.691 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=fc16b24d29d3dce1b5d3b8d965486aae66462f87] 2025-07-12 19:15:11.396511 | orchestrator | 19:15:11.396 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=577eabb3-f4e9-432b-b0bb-74b609e4d50b] 2025-07-12 19:15:19.171945 | orchestrator | 19:15:19.171 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-07-12 19:15:19.180200 | orchestrator | 19:15:19.179 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-07-12 19:15:19.181573 | orchestrator | 19:15:19.180 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-07-12 19:15:19.181665 | orchestrator | 19:15:19.180 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-07-12 19:15:19.187889 | orchestrator | 19:15:19.187 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... 
[10s elapsed] 2025-07-12 19:15:19.190243 | orchestrator | 19:15:19.190 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-07-12 19:15:29.173305 | orchestrator | 19:15:29.172 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-07-12 19:15:29.181345 | orchestrator | 19:15:29.181 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-07-12 19:15:29.181428 | orchestrator | 19:15:29.181 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-07-12 19:15:29.181642 | orchestrator | 19:15:29.181 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-07-12 19:15:29.188617 | orchestrator | 19:15:29.188 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-07-12 19:15:29.190912 | orchestrator | 19:15:29.190 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-07-12 19:15:39.174640 | orchestrator | 19:15:39.174 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-07-12 19:15:39.182365 | orchestrator | 19:15:39.181 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-07-12 19:15:39.182457 | orchestrator | 19:15:39.182 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-07-12 19:15:39.182473 | orchestrator | 19:15:39.182 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-07-12 19:15:39.189531 | orchestrator | 19:15:39.189 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-07-12 19:15:39.191819 | orchestrator | 19:15:39.191 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-07-12 19:15:39.823252 | orchestrator | 19:15:39.822 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=d1ab7133-7d63-4aee-91e6-b3d3fc580c25] 2025-07-12 19:15:39.891916 | orchestrator | 19:15:39.891 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=66c4c633-1a1b-43b1-b423-ae642fabc945] 2025-07-12 19:15:40.007853 | orchestrator | 19:15:40.007 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=7aaed8ba-3545-45c6-8dfe-9ebbc31d0e60] 2025-07-12 19:15:40.108730 | orchestrator | 19:15:40.108 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=223b5165-703c-4e4c-b3fe-1c12a90223e7] 2025-07-12 19:15:40.320979 | orchestrator | 19:15:40.320 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=fe35d918-858f-4dcf-ab6b-774c10625542] 2025-07-12 19:15:49.182763 | orchestrator | 19:15:49.182 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-07-12 19:15:50.012370 | orchestrator | 19:15:50.011 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=7719b277-1336-41ba-8298-c8f829804764] 2025-07-12 19:15:50.047383 | orchestrator | 19:15:50.047 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-07-12 19:15:50.050736 | orchestrator | 19:15:50.050 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-07-12 19:15:50.050901 | orchestrator | 19:15:50.050 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-07-12 19:15:50.052451 | orchestrator | 19:15:50.052 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 
2025-07-12 19:15:50.054418 | orchestrator | 19:15:50.054 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-07-12 19:15:50.055994 | orchestrator | 19:15:50.055 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-07-12 19:15:50.061836 | orchestrator | 19:15:50.061 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3946345059502241322] 2025-07-12 19:15:50.063673 | orchestrator | 19:15:50.063 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-07-12 19:15:50.068854 | orchestrator | 19:15:50.068 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-07-12 19:15:50.070078 | orchestrator | 19:15:50.069 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-07-12 19:15:50.070342 | orchestrator | 19:15:50.070 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-07-12 19:15:50.089815 | orchestrator | 19:15:50.089 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
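The nine `node_volume_attachment` resources being created here pair the block-storage volumes from earlier in the apply with the node instances. A hedged sketch of the pattern — the attachment IDs in the log show three volumes per instance, but the exact index mapping used by the testbed is an assumption here:

```hcl
# Sketch only: the volume-to-instance mapping below is illustrative,
# not taken from the testbed repository.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  # Assumed mapping: three consecutive volumes per storage node.
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```

The attachment IDs logged below (`<instance-id>/<volume-id>`) confirm the provider composes them from the two referenced resources.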
2025-07-12 19:15:53.443675 | orchestrator | 19:15:53.443 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=66c4c633-1a1b-43b1-b423-ae642fabc945/0c476a5e-2a4b-4838-9c87-337753775914]
2025-07-12 19:15:53.478418 | orchestrator | 19:15:53.477 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=d1ab7133-7d63-4aee-91e6-b3d3fc580c25/e599fd12-11ff-4888-9095-9cc0b7d1a350]
2025-07-12 19:15:53.488365 | orchestrator | 19:15:53.487 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=66c4c633-1a1b-43b1-b423-ae642fabc945/f459a603-bea4-4ea2-b1cd-cecdf48dbc28]
2025-07-12 19:15:53.505423 | orchestrator | 19:15:53.505 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 4s [id=223b5165-703c-4e4c-b3fe-1c12a90223e7/da649787-4cd2-466e-b254-be39940a6b94]
2025-07-12 19:15:53.520731 | orchestrator | 19:15:53.520 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 4s [id=d1ab7133-7d63-4aee-91e6-b3d3fc580c25/b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb]
2025-07-12 19:15:53.929494 | orchestrator | 19:15:53.929 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=223b5165-703c-4e4c-b3fe-1c12a90223e7/5dd700c9-bc5e-4428-837a-aadccc164418]
2025-07-12 19:15:59.591349 | orchestrator | 19:15:59.590 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 10s [id=223b5165-703c-4e4c-b3fe-1c12a90223e7/a9737b61-a1af-4e5f-b757-491f643427f9]
2025-07-12 19:15:59.592842 | orchestrator | 19:15:59.592 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=66c4c633-1a1b-43b1-b423-ae642fabc945/5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8]
2025-07-12 19:15:59.619687 | orchestrator | 19:15:59.619 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 10s [id=d1ab7133-7d63-4aee-91e6-b3d3fc580c25/0a88cf92-9e41-408b-a9d0-3b2da488fdc7]
2025-07-12 19:16:00.090798 | orchestrator | 19:16:00.090 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-07-12 19:16:10.092098 | orchestrator | 19:16:10.091 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-07-12 19:16:10.506397 | orchestrator | 19:16:10.506 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=c3e2701b-f6c4-4466-b28d-d4682765f892]
2025-07-12 19:16:10.547227 | orchestrator | 19:16:10.546 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-07-12 19:16:10.547431 | orchestrator | 19:16:10.547 STDOUT terraform: Outputs:
2025-07-12 19:16:10.547669 | orchestrator | 19:16:10.547 STDOUT terraform: manager_address =
2025-07-12 19:16:10.547689 | orchestrator | 19:16:10.547 STDOUT terraform: private_key =
2025-07-12 19:16:10.644977 | orchestrator | ok: Runtime: 0:01:22.023792
2025-07-12 19:16:10.675946 |
2025-07-12 19:16:10.676146 | TASK [Create infrastructure (stable)]
2025-07-12 19:16:11.220401 | orchestrator | skipping: Conditional result was False
2025-07-12 19:16:11.238171 |
2025-07-12 19:16:11.238348 | TASK [Fetch manager address]
2025-07-12 19:16:11.674161 | orchestrator | ok
2025-07-12 19:16:11.688190 |
2025-07-12 19:16:11.688343 | TASK [Set manager_host address]
2025-07-12 19:16:11.767758 | orchestrator | ok
2025-07-12 19:16:11.779085 |
2025-07-12 19:16:11.779231 | LOOP [Update ansible collections]
2025-07-12 19:16:14.028527 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-12 19:16:14.028853 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-12 19:16:14.028914 | orchestrator | Starting galaxy collection install process
2025-07-12 19:16:14.028958 | orchestrator | Process install dependency map
2025-07-12 19:16:14.028997 | orchestrator | Starting collection install process
2025-07-12 19:16:14.029033 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons'
2025-07-12 19:16:14.029091 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons
2025-07-12 19:16:14.029134 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-07-12 19:16:14.029213 | orchestrator | ok: Item: commons Runtime: 0:00:01.916084
2025-07-12 19:16:15.251349 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-12 19:16:15.251624 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-07-12 19:16:15.252168 | orchestrator | Starting galaxy collection install process
2025-07-12 19:16:15.252219 | orchestrator | Process install dependency map
2025-07-12 19:16:15.252256 | orchestrator | Starting collection install process
2025-07-12 19:16:15.252289 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services'
2025-07-12 19:16:15.252322 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services
2025-07-12 19:16:15.252353 | orchestrator | osism.services:999.0.0 was installed successfully
2025-07-12 19:16:15.252400 | orchestrator | ok: Item: services Runtime: 0:00:00.956082
2025-07-12 19:16:15.279810 |
2025-07-12 19:16:15.279938 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-12 19:16:25.766378 | orchestrator | ok
2025-07-12 19:16:25.774293 |
2025-07-12 19:16:25.774413 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-12 19:17:25.829897 | orchestrator | ok
2025-07-12 19:17:25.839623 |
2025-07-12 19:17:25.839741 | TASK [Fetch manager ssh hostkey]
2025-07-12 19:17:27.408934 | orchestrator | Output suppressed because no_log was given
2025-07-12 19:17:27.423914 |
2025-07-12 19:17:27.424132 | TASK [Get ssh keypair from terraform environment]
2025-07-12 19:17:27.960803 | orchestrator | ok: Runtime: 0:00:00.012278
2025-07-12 19:17:27.977196 |
2025-07-12 19:17:27.977366 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-12 19:17:28.012668 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-12 19:17:28.021832 |
2025-07-12 19:17:28.021986 | TASK [Run manager part 0]
2025-07-12 19:17:29.067420 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-12 19:17:29.178978 | orchestrator |
2025-07-12 19:17:29.179038 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2025-07-12 19:17:29.179047 | orchestrator |
2025-07-12 19:17:29.179061 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2025-07-12 19:17:31.011657 | orchestrator | ok: [testbed-manager]
2025-07-12 19:17:31.011741 | orchestrator |
2025-07-12 19:17:31.011773 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-12 19:17:31.011788 | orchestrator |
2025-07-12 19:17:31.011802 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 19:17:32.907426 | orchestrator | ok: [testbed-manager]
2025-07-12 19:17:32.907486 | orchestrator |
2025-07-12 19:17:32.907494 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-12 19:17:33.590922 | orchestrator | ok: [testbed-manager]
2025-07-12 19:17:33.590991 | orchestrator |
2025-07-12 19:17:33.591005 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-12 19:17:33.646270 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:17:33.646376 | orchestrator |
2025-07-12 19:17:33.646388 | orchestrator | TASK [Update package cache] ****************************************************
2025-07-12 19:17:33.674098 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:17:33.674124 | orchestrator |
2025-07-12 19:17:33.674131 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-12 19:17:33.707990 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:17:33.708014 | orchestrator |
2025-07-12 19:17:33.708019 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-12 19:17:33.742846 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:17:33.742876 | orchestrator |
2025-07-12 19:17:33.742880 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-12 19:17:33.767847 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:17:33.767866 | orchestrator |
2025-07-12 19:17:33.767871 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ******************************
2025-07-12 19:17:33.793114 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:17:33.793137 | orchestrator |
2025-07-12 19:17:33.793144 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2025-07-12 19:17:33.825441 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:17:33.825456 | orchestrator |
2025-07-12 19:17:33.825461 | orchestrator | TASK [Set APT options on manager] **********************************************
2025-07-12 19:17:34.593866 | orchestrator | changed: [testbed-manager]
2025-07-12 19:17:34.593931 | orchestrator |
2025-07-12 19:17:34.593940 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2025-07-12 19:19:42.601809 | orchestrator | changed: [testbed-manager]
2025-07-12 19:19:42.601979 | orchestrator |
2025-07-12 19:19:42.601998 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-12 19:21:04.730934 | orchestrator | changed: [testbed-manager]
2025-07-12 19:21:04.731629 | orchestrator |
2025-07-12 19:21:04.731663 | orchestrator | TASK [Install required packages] ***********************************************
2025-07-12 19:21:27.270738 | orchestrator | changed: [testbed-manager]
2025-07-12 19:21:27.270834 | orchestrator |
2025-07-12 19:21:27.270849 | orchestrator | TASK [Remove some python packages] *********************************************
2025-07-12 19:21:35.754121 | orchestrator | changed: [testbed-manager]
2025-07-12 19:21:35.754196 | orchestrator |
2025-07-12 19:21:35.754208 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-12 19:21:35.792752 | orchestrator | ok: [testbed-manager]
2025-07-12 19:21:35.792862 | orchestrator |
2025-07-12 19:21:35.792881 | orchestrator | TASK [Get current user] ********************************************************
2025-07-12 19:21:36.529519 | orchestrator | ok: [testbed-manager]
2025-07-12 19:21:36.529556 | orchestrator |
2025-07-12 19:21:36.529563 | orchestrator | TASK [Create venv directory] ***************************************************
2025-07-12 19:21:37.296640 | orchestrator | changed: [testbed-manager]
2025-07-12 19:21:37.296680 | orchestrator |
2025-07-12 19:21:37.296688 | orchestrator | TASK [Install netaddr in venv] *************************************************
2025-07-12 19:21:43.292373 | orchestrator | changed: [testbed-manager]
2025-07-12 19:21:43.292452 | orchestrator |
2025-07-12 19:21:43.292489 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2025-07-12 19:21:49.121493 | orchestrator | changed: [testbed-manager]
2025-07-12 19:21:49.121541 | orchestrator |
2025-07-12 19:21:49.121551 | orchestrator | TASK [Install requests >= 2.32.2] **********************************************
2025-07-12 19:21:51.722289 | orchestrator | changed: [testbed-manager]
2025-07-12 19:21:51.722396 | orchestrator |
2025-07-12 19:21:51.722419 | orchestrator | TASK [Install docker >= 7.1.0] *************************************************
2025-07-12 19:21:53.450175 | orchestrator | changed: [testbed-manager]
2025-07-12 19:21:53.450262 | orchestrator |
2025-07-12 19:21:53.450279 | orchestrator | TASK [Create directories in /opt/src] ******************************************
2025-07-12 19:21:54.594918 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-12 19:21:54.595466 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-12 19:21:54.595483 | orchestrator |
2025-07-12 19:21:54.595492 | orchestrator | TASK [Sync sources in /opt/src] ************************************************
2025-07-12 19:21:54.638307 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-12 19:21:54.638383 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-12 19:21:54.638397 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-12 19:21:54.638410 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-07-12 19:22:00.390500 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons)
2025-07-12 19:22:00.390545 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services)
2025-07-12 19:22:00.390551 | orchestrator |
2025-07-12 19:22:00.390556 | orchestrator | TASK [Create /usr/share/ansible directory] *************************************
2025-07-12 19:22:00.943854 | orchestrator | changed: [testbed-manager]
2025-07-12 19:22:00.943916 | orchestrator |
2025-07-12 19:22:00.943931 | orchestrator | TASK [Install collections from Ansible galaxy] *********************************
2025-07-12 19:23:32.413396 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon)
2025-07-12 19:23:32.413468 | orchestrator | changed: [testbed-manager] => (item=ansible.posix)
2025-07-12 19:23:32.413484 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2)
2025-07-12 19:23:32.413497 | orchestrator |
2025-07-12 19:23:32.413510 | orchestrator | TASK [Install local collections] ***********************************************
2025-07-12 19:23:34.677910 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons)
2025-07-12 19:23:34.678008 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services)
2025-07-12 19:23:34.678109 | orchestrator |
2025-07-12 19:23:34.678134 | orchestrator | PLAY [Create operator user] ****************************************************
2025-07-12 19:23:34.678157 | orchestrator |
2025-07-12 19:23:34.678177 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 19:23:36.082146 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:36.082223 | orchestrator |
2025-07-12 19:23:36.082234 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-12 19:23:36.135078 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:36.135171 | orchestrator |
2025-07-12 19:23:36.135187 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-12 19:23:36.205707 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:36.205789 | orchestrator |
2025-07-12 19:23:36.205838 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-12 19:23:37.007182 | orchestrator | changed: [testbed-manager]
2025-07-12 19:23:37.007274 | orchestrator |
2025-07-12 19:23:37.007289 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-12 19:23:37.707616 | orchestrator | changed: [testbed-manager]
2025-07-12 19:23:37.707699 | orchestrator |
2025-07-12 19:23:37.707713 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-12 19:23:39.050860 | orchestrator | changed: [testbed-manager] => (item=adm)
2025-07-12 19:23:39.050958 | orchestrator | changed: [testbed-manager] => (item=sudo)
2025-07-12 19:23:39.050982 | orchestrator |
2025-07-12 19:23:39.051021 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-12 19:23:40.443222 | orchestrator | changed: [testbed-manager]
2025-07-12 19:23:40.443331 | orchestrator |
2025-07-12 19:23:40.443348 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-12 19:23:42.144785 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 19:23:42.144848 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8)
2025-07-12 19:23:42.144855 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8)
2025-07-12 19:23:42.144861 | orchestrator |
2025-07-12 19:23:42.144868 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-12 19:23:42.201287 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:23:42.201375 | orchestrator |
2025-07-12 19:23:42.201386 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-12 19:23:42.750301 | orchestrator | changed: [testbed-manager]
2025-07-12 19:23:42.750341 | orchestrator |
2025-07-12 19:23:42.750351 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-12 19:23:42.821870 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:23:42.821914 | orchestrator |
2025-07-12 19:23:42.821922 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-12 19:23:43.719289 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 19:23:43.719330 | orchestrator | changed: [testbed-manager]
2025-07-12 19:23:43.719339 | orchestrator |
2025-07-12 19:23:43.719346 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-12 19:23:43.753100 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:23:43.753141 | orchestrator |
2025-07-12 19:23:43.753150 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-12 19:23:43.787494 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:23:43.787533 | orchestrator |
2025-07-12 19:23:43.787542 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-12 19:23:43.823932 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:23:43.823970 | orchestrator |
2025-07-12 19:23:43.823978 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-12 19:23:43.876153 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:23:43.876193 | orchestrator |
2025-07-12 19:23:43.876202 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-12 19:23:44.613455 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:44.613553 | orchestrator |
2025-07-12 19:23:44.613573 | orchestrator | PLAY [Run manager part 0] ******************************************************
2025-07-12 19:23:44.613592 | orchestrator |
2025-07-12 19:23:44.613613 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 19:23:46.058933 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:46.058968 | orchestrator |
2025-07-12 19:23:46.058974 | orchestrator | TASK [Recursively change ownership of /opt/venv] *******************************
2025-07-12 19:23:47.023132 | orchestrator | changed: [testbed-manager]
2025-07-12 19:23:47.023785 | orchestrator |
2025-07-12 19:23:47.023861 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:23:47.023885 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2025-07-12 19:23:47.023906 | orchestrator |
2025-07-12 19:23:47.277967 | orchestrator | ok: Runtime: 0:06:18.798844
2025-07-12 19:23:47.296036 |
2025-07-12 19:23:47.296177 | TASK [Point out that logging in on the manager is now possible]
2025-07-12 19:23:47.332142 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'.
2025-07-12 19:23:47.342484 |
2025-07-12 19:23:47.342606 | TASK [Point out that the following task takes some time and does not give any output]
2025-07-12 19:23:47.372901 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-07-12 19:23:47.380351 |
2025-07-12 19:23:47.380470 | TASK [Run manager part 1 + 2]
2025-07-12 19:23:48.229647 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-07-12 19:23:48.292649 | orchestrator |
2025-07-12 19:23:48.292697 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-07-12 19:23:48.292704 | orchestrator |
2025-07-12 19:23:48.292717 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 19:23:51.230387 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:51.230434 | orchestrator |
2025-07-12 19:23:51.230455 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-07-12 19:23:51.265334 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:23:51.265377 | orchestrator |
2025-07-12 19:23:51.265386 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-07-12 19:23:51.310912 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:51.310961 | orchestrator |
2025-07-12 19:23:51.310970 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-12 19:23:51.350653 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:51.350707 | orchestrator |
2025-07-12 19:23:51.350717 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-12 19:23:51.429440 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:51.429496 | orchestrator |
2025-07-12 19:23:51.429507 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-12 19:23:51.489792 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:51.489886 | orchestrator |
2025-07-12 19:23:51.489904 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-12 19:23:51.535143 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager
2025-07-12 19:23:51.535230 | orchestrator |
2025-07-12 19:23:51.535249 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-12 19:23:52.257505 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:52.258139 | orchestrator |
2025-07-12 19:23:52.258171 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-12 19:23:52.303936 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:23:52.304029 | orchestrator |
2025-07-12 19:23:52.304047 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-12 19:23:53.646859 | orchestrator | changed: [testbed-manager]
2025-07-12 19:23:53.646952 | orchestrator |
2025-07-12 19:23:53.646971 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-12 19:23:54.243492 | orchestrator | ok: [testbed-manager]
2025-07-12 19:23:54.243581 | orchestrator |
2025-07-12 19:23:54.243597 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-12 19:23:55.405666 | orchestrator | changed: [testbed-manager]
2025-07-12 19:23:55.405735 | orchestrator |
2025-07-12 19:23:55.405753 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-12 19:24:10.158756 | orchestrator | changed: [testbed-manager]
2025-07-12 19:24:10.158861 | orchestrator |
2025-07-12 19:24:10.158878 | orchestrator | TASK [Get home directory of ansible user] **************************************
2025-07-12 19:24:10.826882 | orchestrator | ok: [testbed-manager]
2025-07-12 19:24:10.827086 | orchestrator |
2025-07-12 19:24:10.827108 | orchestrator | TASK [Set repo_path fact] ******************************************************
2025-07-12 19:24:10.880122 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:24:10.880204 | orchestrator |
2025-07-12 19:24:10.880220 | orchestrator | TASK [Copy SSH public key] *****************************************************
2025-07-12 19:24:11.839290 | orchestrator | changed: [testbed-manager]
2025-07-12 19:24:11.839373 | orchestrator |
2025-07-12 19:24:11.839390 | orchestrator | TASK [Copy SSH private key] ****************************************************
2025-07-12 19:24:12.815088 | orchestrator | changed: [testbed-manager]
2025-07-12 19:24:12.815173 | orchestrator |
2025-07-12 19:24:12.815189 | orchestrator | TASK [Create configuration directory] ******************************************
2025-07-12 19:24:13.381946 | orchestrator | changed: [testbed-manager]
2025-07-12 19:24:13.382072 | orchestrator |
2025-07-12 19:24:13.382093 | orchestrator | TASK [Copy testbed repo] *******************************************************
2025-07-12 19:24:13.421060 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call
2025-07-12 19:24:13.421161 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version
2025-07-12 19:24:13.421176 | orchestrator | 2.19. Deprecation warnings can be disabled by setting
2025-07-12 19:24:13.421188 | orchestrator | deprecation_warnings=False in ansible.cfg.
2025-07-12 19:24:16.329448 | orchestrator | changed: [testbed-manager]
2025-07-12 19:24:16.329549 | orchestrator |
2025-07-12 19:24:16.329567 | orchestrator | TASK [Install python requirements in venv] *************************************
2025-07-12 19:24:24.747521 | orchestrator | ok: [testbed-manager] => (item=Jinja2)
2025-07-12 19:24:24.747597 | orchestrator | ok: [testbed-manager] => (item=PyYAML)
2025-07-12 19:24:24.747614 | orchestrator | ok: [testbed-manager] => (item=packaging)
2025-07-12 19:24:24.747626 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3)
2025-07-12 19:24:24.747644 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2)
2025-07-12 19:24:24.747655 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0)
2025-07-12 19:24:24.747666 | orchestrator |
2025-07-12 19:24:24.747678 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] *********************
2025-07-12 19:24:25.735857 | orchestrator | changed: [testbed-manager]
2025-07-12 19:24:25.735942 | orchestrator |
2025-07-12 19:24:25.735955 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] ****************************
2025-07-12 19:24:25.777579 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:24:25.777642 | orchestrator |
2025-07-12 19:24:25.777654 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] *****************************
2025-07-12 19:24:28.894297 | orchestrator | changed: [testbed-manager]
2025-07-12 19:24:28.894385 | orchestrator |
2025-07-12 19:24:28.894401 | orchestrator | TASK [Run update-ca-trust on RedHat] *******************************************
2025-07-12 19:24:28.937249 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:24:28.937314 | orchestrator |
2025-07-12 19:24:28.937327 | orchestrator | TASK [Run manager part 2] ******************************************************
2025-07-12 19:26:01.498043 | orchestrator | changed: [testbed-manager]
2025-07-12 19:26:01.498082 | orchestrator |
2025-07-12 19:26:01.498090 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-12 19:26:02.625278 | orchestrator | ok: [testbed-manager]
2025-07-12 19:26:02.625315 | orchestrator |
2025-07-12 19:26:02.625322 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:26:02.625329 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-07-12 19:26:02.625335 | orchestrator |
2025-07-12 19:26:02.988822 | orchestrator | ok: Runtime: 0:02:15.032723
2025-07-12 19:26:03.004171 |
2025-07-12 19:26:03.004301 | TASK [Reboot manager]
2025-07-12 19:26:05.041353 | orchestrator | ok: Runtime: 0:00:00.961032
2025-07-12 19:26:05.058386 |
2025-07-12 19:26:05.058560 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-12 19:26:19.513049 | orchestrator | ok
2025-07-12 19:26:19.523931 |
2025-07-12 19:26:19.524080 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-12 19:27:19.569829 | orchestrator | ok
2025-07-12 19:27:19.579350 |
2025-07-12 19:27:19.579471 | TASK [Deploy manager + bootstrap nodes]
2025-07-12 19:27:22.158711 | orchestrator |
2025-07-12 19:27:22.158926 | orchestrator | # DEPLOY MANAGER
2025-07-12 19:27:22.158955 | orchestrator |
2025-07-12 19:27:22.158970 | orchestrator | + set -e
2025-07-12 19:27:22.158984 | orchestrator | + echo
2025-07-12 19:27:22.158998 | orchestrator | + echo '# DEPLOY MANAGER'
2025-07-12 19:27:22.159016 | orchestrator | + echo
2025-07-12 19:27:22.159064 | orchestrator | + cat /opt/manager-vars.sh
2025-07-12 19:27:22.162148 | orchestrator | export NUMBER_OF_NODES=6
2025-07-12 19:27:22.162236 | orchestrator |
2025-07-12 19:27:22.162254 | orchestrator | export CEPH_VERSION=reef
2025-07-12 19:27:22.162268 | orchestrator | export CONFIGURATION_VERSION=main
2025-07-12 19:27:22.162281 | orchestrator | export MANAGER_VERSION=latest
2025-07-12 19:27:22.162310 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-07-12 19:27:22.162322 | orchestrator |
2025-07-12 19:27:22.162340 | orchestrator | export ARA=false
2025-07-12 19:27:22.162352 | orchestrator | export DEPLOY_MODE=manager
2025-07-12 19:27:22.162370 | orchestrator | export TEMPEST=false
2025-07-12 19:27:22.162382 | orchestrator | export IS_ZUUL=true
2025-07-12 19:27:22.162393 | orchestrator |
2025-07-12 19:27:22.162411 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-07-12 19:27:22.162423 | orchestrator | export EXTERNAL_API=false
2025-07-12 19:27:22.162434 | orchestrator |
2025-07-12 19:27:22.162445 | orchestrator | export IMAGE_USER=ubuntu
2025-07-12 19:27:22.162459 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-07-12 19:27:22.162470 | orchestrator |
2025-07-12 19:27:22.162481 | orchestrator | export CEPH_STACK=ceph-ansible
2025-07-12 19:27:22.162503 | orchestrator |
2025-07-12 19:27:22.162515 | orchestrator | + echo
2025-07-12 19:27:22.162527 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 19:27:22.162942 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 19:27:22.162964 | orchestrator | ++ INTERACTIVE=false
2025-07-12 19:27:22.162976 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 19:27:22.162989 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 19:27:22.163240 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 19:27:22.163283 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 19:27:22.163296 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 19:27:22.163355 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 19:27:22.163367 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 19:27:22.163380 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 19:27:22.163400 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 19:27:22.163411 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-12 19:27:22.163422 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-12 19:27:22.163433 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 19:27:22.163454 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 19:27:22.163465 | orchestrator | ++ export ARA=false
2025-07-12 19:27:22.163476 | orchestrator | ++ ARA=false
2025-07-12 19:27:22.163487 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 19:27:22.163498 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 19:27:22.163508 | orchestrator | ++ export TEMPEST=false
2025-07-12 19:27:22.163520 | orchestrator | ++ TEMPEST=false
2025-07-12 19:27:22.163530 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 19:27:22.163541 | orchestrator | ++ IS_ZUUL=true
2025-07-12 19:27:22.163552 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-07-12 19:27:22.163563 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-07-12 19:27:22.163578 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 19:27:22.163589 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 19:27:22.163600 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 19:27:22.163610 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 19:27:22.163621 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 19:27:22.163632 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 19:27:22.163644 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 19:27:22.163654 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 19:27:22.163665 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-12 19:27:22.225170 | orchestrator | + docker version
2025-07-12 19:27:22.463920 | orchestrator | Client: Docker Engine - Community
2025-07-12 19:27:22.464007 | orchestrator | Version: 27.5.1
2025-07-12 19:27:22.464016 | orchestrator | API version: 1.47
2025-07-12 19:27:22.464020 | orchestrator | Go version: go1.22.11
2025-07-12 19:27:22.464024 | orchestrator | Git commit: 9f9e405
2025-07-12 19:27:22.464028 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-12 19:27:22.464034 | orchestrator | OS/Arch: linux/amd64
2025-07-12 19:27:22.464038 | orchestrator | Context: default
2025-07-12 19:27:22.464042 | orchestrator |
2025-07-12 19:27:22.464046 | orchestrator | Server: Docker Engine - Community
2025-07-12 19:27:22.464050 | orchestrator | Engine:
2025-07-12 19:27:22.464055 | orchestrator | Version: 27.5.1
2025-07-12 19:27:22.464059 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-12 19:27:22.464081 | orchestrator | Go version: go1.22.11
2025-07-12 19:27:22.464085 | orchestrator | Git commit: 4c9b3b0
2025-07-12 19:27:22.464089 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-12 19:27:22.464093 | orchestrator | OS/Arch: linux/amd64
2025-07-12 19:27:22.464097 | orchestrator | Experimental: false
2025-07-12 19:27:22.464100 | orchestrator | containerd:
2025-07-12 19:27:22.464112 | orchestrator | Version: 1.7.27
2025-07-12 19:27:22.464116 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-12 19:27:22.464120 | orchestrator | runc:
2025-07-12 19:27:22.464124 | orchestrator | Version: 1.2.5
2025-07-12 19:27:22.464128 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-12 19:27:22.464132 | orchestrator | docker-init:
2025-07-12 19:27:22.464136 | orchestrator | Version: 0.19.0
2025-07-12 19:27:22.464140 | orchestrator | GitCommit: de40ad0
2025-07-12 19:27:22.468793 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-12 19:27:22.478698 | orchestrator | + set -e
2025-07-12 19:27:22.478727 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 19:27:22.478733 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 19:27:22.478737 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 19:27:22.478741 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 19:27:22.478745 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 19:27:22.478749 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 19:27:22.478754 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 19:27:22.478758 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-12 19:27:22.478763 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-12 19:27:22.478767 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 19:27:22.478771 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 19:27:22.478775 | orchestrator | ++ export ARA=false
2025-07-12 19:27:22.478779 | orchestrator | ++ ARA=false
2025-07-12 19:27:22.478783 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 19:27:22.478787 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 19:27:22.478791 | orchestrator | ++ export TEMPEST=false
2025-07-12 19:27:22.478795 | orchestrator | ++ TEMPEST=false
2025-07-12 19:27:22.478798 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 19:27:22.478802 | orchestrator | ++ IS_ZUUL=true
2025-07-12 19:27:22.478806 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-07-12 19:27:22.478810 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-07-12 19:27:22.478814 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 19:27:22.478818 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 19:27:22.478852 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 19:27:22.478857 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 19:27:22.478861 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 19:27:22.478865 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 19:27:22.478869 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 19:27:22.478873 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 19:27:22.478877 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 19:27:22.478880 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 19:27:22.478885 | orchestrator | ++ INTERACTIVE=false
2025-07-12 19:27:22.478889 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12
19:27:22.478895 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 19:27:22.478962 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-12 19:27:22.478968 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-12 19:27:22.478972 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-07-12 19:27:22.486216 | orchestrator | + set -e 2025-07-12 19:27:22.486243 | orchestrator | + VERSION=reef 2025-07-12 19:27:22.487796 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-12 19:27:22.493450 | orchestrator | + [[ -n ceph_version: reef ]] 2025-07-12 19:27:22.493489 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-07-12 19:27:22.499158 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-07-12 19:27:22.505967 | orchestrator | + set -e 2025-07-12 19:27:22.505992 | orchestrator | + VERSION=2024.2 2025-07-12 19:27:22.507108 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-07-12 19:27:22.511650 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-07-12 19:27:22.511677 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-07-12 19:27:22.516886 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-07-12 19:27:22.518069 | orchestrator | ++ semver latest 7.0.0 2025-07-12 19:27:22.574647 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-12 19:27:22.574725 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-12 19:27:22.574739 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-07-12 19:27:22.574751 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-07-12 19:27:22.667537 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-12 19:27:22.669960 | orchestrator | + source /opt/venv/bin/activate 2025-07-12 
19:27:22.671112 | orchestrator | ++ deactivate nondestructive 2025-07-12 19:27:22.671138 | orchestrator | ++ '[' -n '' ']' 2025-07-12 19:27:22.671150 | orchestrator | ++ '[' -n '' ']' 2025-07-12 19:27:22.671162 | orchestrator | ++ hash -r 2025-07-12 19:27:22.671179 | orchestrator | ++ '[' -n '' ']' 2025-07-12 19:27:22.671215 | orchestrator | ++ unset VIRTUAL_ENV 2025-07-12 19:27:22.671228 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-07-12 19:27:22.671249 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-07-12 19:27:22.671265 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-07-12 19:27:22.671288 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-07-12 19:27:22.671312 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-07-12 19:27:22.671324 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-07-12 19:27:22.671339 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 19:27:22.671388 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 19:27:22.671425 | orchestrator | ++ export PATH 2025-07-12 19:27:22.671496 | orchestrator | ++ '[' -n '' ']' 2025-07-12 19:27:22.671625 | orchestrator | ++ '[' -z '' ']' 2025-07-12 19:27:22.671640 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-07-12 19:27:22.671651 | orchestrator | ++ PS1='(venv) ' 2025-07-12 19:27:22.671662 | orchestrator | ++ export PS1 2025-07-12 19:27:22.671677 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-07-12 19:27:22.671688 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-07-12 19:27:22.671700 | orchestrator | ++ hash -r 2025-07-12 19:27:22.671815 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-07-12 19:27:23.882805 | orchestrator | 2025-07-12 19:27:23.882930 | orchestrator | 
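The trace above shows the deploy script pinning component versions by rewriting `configuration.yml` in place: `set-ceph-version.sh` and `set-openstack-version.sh` each grep for an existing key, then substitute its value with `sed -i`. A minimal sketch of that pattern, assuming an illustrative helper name and a temporary file (the real testbed scripts operate on `/opt/configuration/environments/manager/configuration.yml`):

```shell
#!/usr/bin/env bash
# Sketch of the grep-and-sed version pinning traced above.
# set_version is an illustrative name, not the testbed's exact script.
set -e

set_version() {
    local key="$1" version="$2" file="$3"
    # Only rewrite the value if the key is already present,
    # mirroring the [[ -n $(grep ...) ]] guard in the trace.
    if [[ -n "$(grep "^${key}:" "$file")" ]]; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}

file=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$file"
set_version ceph_version reef "$file"
set_version openstack_version 2024.2 "$file"
cat "$file"
rm -f "$file"
```

The guard keeps the script from appending a duplicate key when the setting is absent; absent keys are simply left untouched, as in the logged run where both keys already existed.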
PLAY [Copy custom facts] ******************************************************* 2025-07-12 19:27:23.882945 | orchestrator | 2025-07-12 19:27:23.882955 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-07-12 19:27:24.458349 | orchestrator | ok: [testbed-manager] 2025-07-12 19:27:24.458448 | orchestrator | 2025-07-12 19:27:24.458464 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-07-12 19:27:25.432534 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:25.432640 | orchestrator | 2025-07-12 19:27:25.432657 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-07-12 19:27:25.432670 | orchestrator | 2025-07-12 19:27:25.432682 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 19:27:27.782435 | orchestrator | ok: [testbed-manager] 2025-07-12 19:27:27.782542 | orchestrator | 2025-07-12 19:27:27.782559 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-07-12 19:27:27.836316 | orchestrator | ok: [testbed-manager] 2025-07-12 19:27:27.836397 | orchestrator | 2025-07-12 19:27:27.836414 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-07-12 19:27:28.290778 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:28.290912 | orchestrator | 2025-07-12 19:27:28.290936 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-07-12 19:27:28.336006 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:27:28.336087 | orchestrator | 2025-07-12 19:27:28.336099 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-12 19:27:28.662329 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:28.662433 | orchestrator | 2025-07-12 19:27:28.662451 | 
orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-07-12 19:27:28.727701 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:27:28.727807 | orchestrator | 2025-07-12 19:27:28.727823 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-07-12 19:27:29.067434 | orchestrator | ok: [testbed-manager] 2025-07-12 19:27:29.067536 | orchestrator | 2025-07-12 19:27:29.067551 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-07-12 19:27:29.188570 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:27:29.188692 | orchestrator | 2025-07-12 19:27:29.188711 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-07-12 19:27:29.188726 | orchestrator | 2025-07-12 19:27:29.188740 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 19:27:30.986357 | orchestrator | ok: [testbed-manager] 2025-07-12 19:27:30.986477 | orchestrator | 2025-07-12 19:27:30.986503 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-07-12 19:27:31.079703 | orchestrator | included: osism.services.traefik for testbed-manager 2025-07-12 19:27:31.079794 | orchestrator | 2025-07-12 19:27:31.079809 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-07-12 19:27:31.135062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-07-12 19:27:31.135155 | orchestrator | 2025-07-12 19:27:31.135169 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-07-12 19:27:32.207888 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-07-12 19:27:32.207959 | orchestrator | changed: [testbed-manager] => 
(item=/opt/traefik/certificates) 2025-07-12 19:27:32.207965 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-07-12 19:27:32.207970 | orchestrator | 2025-07-12 19:27:32.207975 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-07-12 19:27:34.026547 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-07-12 19:27:34.026654 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-07-12 19:27:34.026672 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-07-12 19:27:34.026685 | orchestrator | 2025-07-12 19:27:34.026697 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-07-12 19:27:34.703402 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 19:27:34.703510 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:34.703526 | orchestrator | 2025-07-12 19:27:34.703538 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-07-12 19:27:35.376404 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 19:27:35.376523 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:35.376542 | orchestrator | 2025-07-12 19:27:35.376556 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-07-12 19:27:35.431157 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:27:35.431253 | orchestrator | 2025-07-12 19:27:35.431268 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-07-12 19:27:35.781223 | orchestrator | ok: [testbed-manager] 2025-07-12 19:27:35.781332 | orchestrator | 2025-07-12 19:27:35.781354 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-07-12 19:27:35.854374 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-07-12 19:27:35.854480 | orchestrator | 2025-07-12 19:27:35.854501 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-07-12 19:27:36.923540 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:36.923642 | orchestrator | 2025-07-12 19:27:36.923658 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-07-12 19:27:37.713296 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:37.713397 | orchestrator | 2025-07-12 19:27:37.713413 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-07-12 19:27:49.045701 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:49.045809 | orchestrator | 2025-07-12 19:27:49.045827 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-07-12 19:27:49.093043 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:27:49.093141 | orchestrator | 2025-07-12 19:27:49.093156 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-07-12 19:27:49.093167 | orchestrator | 2025-07-12 19:27:49.093176 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 19:27:50.866941 | orchestrator | ok: [testbed-manager] 2025-07-12 19:27:50.867050 | orchestrator | 2025-07-12 19:27:50.867096 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-07-12 19:27:50.973971 | orchestrator | included: osism.services.manager for testbed-manager 2025-07-12 19:27:50.974110 | orchestrator | 2025-07-12 19:27:50.974124 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-07-12 19:27:51.029195 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 19:27:51.029298 | orchestrator | 2025-07-12 19:27:51.029314 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-07-12 19:27:53.563095 | orchestrator | ok: [testbed-manager] 2025-07-12 19:27:53.563308 | orchestrator | 2025-07-12 19:27:53.563334 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-07-12 19:27:53.611119 | orchestrator | ok: [testbed-manager] 2025-07-12 19:27:53.611200 | orchestrator | 2025-07-12 19:27:53.611211 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-07-12 19:27:53.776382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-07-12 19:27:53.776482 | orchestrator | 2025-07-12 19:27:53.776497 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-07-12 19:27:56.649936 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-07-12 19:27:56.650143 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-07-12 19:27:56.651104 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-07-12 19:27:56.651191 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-07-12 19:27:56.651205 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-07-12 19:27:56.651217 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-07-12 19:27:56.651228 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-07-12 19:27:56.651240 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-07-12 19:27:56.651252 | orchestrator | 2025-07-12 19:27:56.651265 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-07-12 19:27:57.263319 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:57.263443 | orchestrator | 2025-07-12 19:27:57.263464 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-07-12 19:27:57.902273 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:57.902378 | orchestrator | 2025-07-12 19:27:57.902395 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-07-12 19:27:57.983312 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-07-12 19:27:57.983413 | orchestrator | 2025-07-12 19:27:57.983427 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-07-12 19:27:59.170344 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-07-12 19:27:59.170452 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-07-12 19:27:59.170468 | orchestrator | 2025-07-12 19:27:59.170482 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-07-12 19:27:59.793264 | orchestrator | changed: [testbed-manager] 2025-07-12 19:27:59.793370 | orchestrator | 2025-07-12 19:27:59.793386 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-07-12 19:27:59.857579 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:27:59.857672 | orchestrator | 2025-07-12 19:27:59.857687 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-07-12 19:27:59.920118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-07-12 19:27:59.920210 | orchestrator | 2025-07-12 19:27:59.920225 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2025-07-12 19:28:01.255604 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 19:28:01.255717 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 19:28:01.255734 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:01.255749 | orchestrator | 2025-07-12 19:28:01.255762 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-07-12 19:28:01.890258 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:01.890387 | orchestrator | 2025-07-12 19:28:01.890405 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-07-12 19:28:01.938348 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:28:01.938438 | orchestrator | 2025-07-12 19:28:01.938453 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-07-12 19:28:02.026328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-07-12 19:28:02.026420 | orchestrator | 2025-07-12 19:28:02.026435 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-07-12 19:28:02.534479 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:02.534582 | orchestrator | 2025-07-12 19:28:02.534600 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-07-12 19:28:02.931705 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:02.931829 | orchestrator | 2025-07-12 19:28:02.931848 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-07-12 19:28:04.129311 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-07-12 19:28:04.129416 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-07-12 
19:28:04.129430 | orchestrator | 2025-07-12 19:28:04.129441 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-07-12 19:28:04.743051 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:04.743155 | orchestrator | 2025-07-12 19:28:04.743172 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-07-12 19:28:05.131252 | orchestrator | ok: [testbed-manager] 2025-07-12 19:28:05.131358 | orchestrator | 2025-07-12 19:28:05.131375 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-07-12 19:28:05.495280 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:05.495390 | orchestrator | 2025-07-12 19:28:05.495409 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-07-12 19:28:05.532172 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:28:05.532255 | orchestrator | 2025-07-12 19:28:05.532264 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-07-12 19:28:05.591505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-07-12 19:28:05.591610 | orchestrator | 2025-07-12 19:28:05.591626 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-07-12 19:28:05.622184 | orchestrator | ok: [testbed-manager] 2025-07-12 19:28:05.622280 | orchestrator | 2025-07-12 19:28:05.622302 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-07-12 19:28:07.572406 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-07-12 19:28:07.572532 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-07-12 19:28:07.572559 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-07-12 
19:28:07.572571 | orchestrator | 2025-07-12 19:28:07.572584 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-07-12 19:28:08.268396 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:08.268500 | orchestrator | 2025-07-12 19:28:08.268519 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-07-12 19:28:08.957493 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:08.957590 | orchestrator | 2025-07-12 19:28:08.957605 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-07-12 19:28:09.677278 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:09.677385 | orchestrator | 2025-07-12 19:28:09.677403 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-07-12 19:28:09.745662 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-07-12 19:28:09.745728 | orchestrator | 2025-07-12 19:28:09.745741 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-07-12 19:28:09.787091 | orchestrator | ok: [testbed-manager] 2025-07-12 19:28:09.787199 | orchestrator | 2025-07-12 19:28:09.787212 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-07-12 19:28:10.464177 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-07-12 19:28:10.464279 | orchestrator | 2025-07-12 19:28:10.464294 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-07-12 19:28:10.542091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-07-12 19:28:10.542178 | orchestrator | 2025-07-12 19:28:10.542192 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2025-07-12 19:28:11.241410 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:11.241531 | orchestrator | 2025-07-12 19:28:11.241548 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-07-12 19:28:11.831349 | orchestrator | ok: [testbed-manager] 2025-07-12 19:28:11.831451 | orchestrator | 2025-07-12 19:28:11.831469 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-07-12 19:28:11.888532 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:28:11.888627 | orchestrator | 2025-07-12 19:28:11.888643 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-07-12 19:28:11.943290 | orchestrator | ok: [testbed-manager] 2025-07-12 19:28:11.943363 | orchestrator | 2025-07-12 19:28:11.943371 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-07-12 19:28:12.736804 | orchestrator | changed: [testbed-manager] 2025-07-12 19:28:12.736977 | orchestrator | 2025-07-12 19:28:12.736997 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-07-12 19:29:21.102291 | orchestrator | changed: [testbed-manager] 2025-07-12 19:29:21.102412 | orchestrator | 2025-07-12 19:29:21.102431 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-07-12 19:29:22.094439 | orchestrator | ok: [testbed-manager] 2025-07-12 19:29:22.094544 | orchestrator | 2025-07-12 19:29:22.094561 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-07-12 19:29:22.145782 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:29:22.145922 | orchestrator | 2025-07-12 19:29:22.145939 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2025-07-12 19:29:24.671394 | orchestrator | changed: [testbed-manager] 2025-07-12 19:29:24.671505 | orchestrator | 2025-07-12 19:29:24.671523 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-07-12 19:29:24.723391 | orchestrator | ok: [testbed-manager] 2025-07-12 19:29:24.723490 | orchestrator | 2025-07-12 19:29:24.723508 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-07-12 19:29:24.723521 | orchestrator | 2025-07-12 19:29:24.723533 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-07-12 19:29:24.781465 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:29:24.781620 | orchestrator | 2025-07-12 19:29:24.781635 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-07-12 19:30:24.838760 | orchestrator | Pausing for 60 seconds 2025-07-12 19:30:24.838918 | orchestrator | changed: [testbed-manager] 2025-07-12 19:30:24.838938 | orchestrator | 2025-07-12 19:30:24.838953 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-07-12 19:30:27.997196 | orchestrator | changed: [testbed-manager] 2025-07-12 19:30:27.997307 | orchestrator | 2025-07-12 19:30:27.997324 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-07-12 19:31:09.729943 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-07-12 19:31:09.730103 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-07-12 19:31:09.730121 | orchestrator | changed: [testbed-manager]
2025-07-12 19:31:09.730136 | orchestrator |
2025-07-12 19:31:09.730148 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-12 19:31:19.265692 | orchestrator | changed: [testbed-manager]
2025-07-12 19:31:19.265788 | orchestrator |
2025-07-12 19:31:19.265805 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-12 19:31:19.349123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-12 19:31:19.349239 | orchestrator |
2025-07-12 19:31:19.349256 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-12 19:31:19.349269 | orchestrator |
2025-07-12 19:31:19.349281 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-12 19:31:19.397177 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:31:19.397258 | orchestrator |
2025-07-12 19:31:19.397272 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:31:19.397285 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-12 19:31:19.397297 | orchestrator |
2025-07-12 19:31:19.477833 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 19:31:19.477956 | orchestrator | + deactivate
2025-07-12 19:31:19.477972 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-12 19:31:19.477985 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 19:31:19.477997 | orchestrator | + export PATH
2025-07-12 19:31:19.478008 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-12 19:31:19.478071 | orchestrator | + '[' -n '' ']'
2025-07-12 19:31:19.478083 | orchestrator | + hash -r
2025-07-12 19:31:19.478093 | orchestrator | + '[' -n '' ']'
2025-07-12 19:31:19.478104 | orchestrator | + unset VIRTUAL_ENV
2025-07-12 19:31:19.478115 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-12 19:31:19.478126 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-12 19:31:19.478137 | orchestrator | + unset -f deactivate
2025-07-12 19:31:19.478148 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-12 19:31:19.482626 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-12 19:31:19.482691 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-12 19:31:19.482705 | orchestrator | + local max_attempts=60
2025-07-12 19:31:19.482717 | orchestrator | + local name=ceph-ansible
2025-07-12 19:31:19.482729 | orchestrator | + local attempt_num=1
2025-07-12 19:31:19.482879 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-12 19:31:19.523376 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 19:31:19.523427 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-12 19:31:19.523434 | orchestrator | + local max_attempts=60
2025-07-12 19:31:19.523440 | orchestrator | + local name=kolla-ansible
2025-07-12 19:31:19.523446 | orchestrator | + local attempt_num=1
2025-07-12 19:31:19.523709 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-12 19:31:19.562638 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 19:31:19.562724 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-12 19:31:19.562742 | orchestrator | + local max_attempts=60
2025-07-12 19:31:19.562754 | orchestrator | + local name=osism-ansible
2025-07-12 19:31:19.562765 | orchestrator | + local attempt_num=1
2025-07-12 19:31:19.564048 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-12 19:31:19.593531 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-12 19:31:19.593594 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-12 19:31:19.593607 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-12 19:31:20.240960 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-12 19:31:20.462892 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-12 19:31:20.463016 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-12 19:31:20.463031 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-12 19:31:20.463043 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-12 19:31:20.463056 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-12 19:31:20.463135 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-12 19:31:20.463157 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-12 19:31:20.463169 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-07-12 19:31:20.463180 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-12 19:31:20.463332 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-07-12 19:31:20.463347 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-07-12 19:31:20.463358 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-07-12 19:31:20.463369 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-07-12 19:31:20.463380 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-07-12 19:31:20.463391 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-07-12 19:31:20.468096 | orchestrator | ++ semver latest 7.0.0
2025-07-12 19:31:20.518815 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-12 19:31:20.518889 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-12 19:31:20.518940 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-07-12 19:31:20.523683 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-07-12 19:31:32.444217 | orchestrator | 2025-07-12 19:31:32 | INFO  | Task 4735e098-f233-46a5-8099-4bdade199751 (resolvconf) was prepared for execution.
2025-07-12 19:31:32.444322 | orchestrator | 2025-07-12 19:31:32 | INFO  | It takes a moment until task 4735e098-f233-46a5-8099-4bdade199751 (resolvconf) has been started and output is visible here.
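Editor's note: the `wait_for_container_healthy` calls traced above all succeed on the first probe. A minimal sketch of such a helper, reconstructed from the trace (hypothetical — not the testbed's actual script; the `DOCKER` and `WAIT_INTERVAL` overrides are added here only so the sketch can be exercised without a Docker daemon):

```shell
#!/usr/bin/env bash
# Poll a container's health status until it reports "healthy" or the
# attempt budget is exhausted, mirroring the trace above.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # DOCKER is a test hook; the real trace hardcodes /usr/bin/docker.
    local docker_cmd=${DOCKER:-/usr/bin/docker}
    while [[ "$("$docker_cmd" inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep "${WAIT_INTERVAL:-5}"
    done
}
```

Called as in the trace (`wait_for_container_healthy 60 ceph-ansible`), this gates the rest of the deploy script on the manager containers passing their Docker healthchecks.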
2025-07-12 19:31:45.467220 | orchestrator |
2025-07-12 19:31:45.467305 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-07-12 19:31:45.467321 | orchestrator |
2025-07-12 19:31:45.467332 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 19:31:45.467342 | orchestrator | Saturday 12 July 2025 19:31:36 +0000 (0:00:00.153) 0:00:00.153 *********
2025-07-12 19:31:45.467353 | orchestrator | ok: [testbed-manager]
2025-07-12 19:31:45.467364 | orchestrator |
2025-07-12 19:31:45.467376 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-07-12 19:31:45.467387 | orchestrator | Saturday 12 July 2025 19:31:40 +0000 (0:00:03.678) 0:00:03.831 *********
2025-07-12 19:31:45.467398 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:31:45.467409 | orchestrator |
2025-07-12 19:31:45.467424 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-07-12 19:31:45.467434 | orchestrator | Saturday 12 July 2025 19:31:40 +0000 (0:00:00.065) 0:00:03.896 *********
2025-07-12 19:31:45.467462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-07-12 19:31:45.467473 | orchestrator |
2025-07-12 19:31:45.467483 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-07-12 19:31:45.467493 | orchestrator | Saturday 12 July 2025 19:31:40 +0000 (0:00:00.085) 0:00:03.982 *********
2025-07-12 19:31:45.467504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 19:31:45.467515 | orchestrator |
2025-07-12 19:31:45.467526 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-07-12 19:31:45.467536 | orchestrator | Saturday 12 July 2025 19:31:40 +0000 (0:00:00.074) 0:00:04.056 *********
2025-07-12 19:31:45.467547 | orchestrator | ok: [testbed-manager]
2025-07-12 19:31:45.467557 | orchestrator |
2025-07-12 19:31:45.467567 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-07-12 19:31:45.467578 | orchestrator | Saturday 12 July 2025 19:31:41 +0000 (0:00:00.951) 0:00:05.008 *********
2025-07-12 19:31:45.467588 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:31:45.467599 | orchestrator |
2025-07-12 19:31:45.467609 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-07-12 19:31:45.467620 | orchestrator | Saturday 12 July 2025 19:31:41 +0000 (0:00:00.059) 0:00:05.067 *********
2025-07-12 19:31:45.467630 | orchestrator | ok: [testbed-manager]
2025-07-12 19:31:45.467640 | orchestrator |
2025-07-12 19:31:45.467650 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-07-12 19:31:45.467662 | orchestrator | Saturday 12 July 2025 19:31:41 +0000 (0:00:00.440) 0:00:05.508 *********
2025-07-12 19:31:45.467672 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:31:45.467683 | orchestrator |
2025-07-12 19:31:45.467693 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-07-12 19:31:45.467704 | orchestrator | Saturday 12 July 2025 19:31:41 +0000 (0:00:00.475) 0:00:05.590 *********
2025-07-12 19:31:45.467714 | orchestrator | changed: [testbed-manager]
2025-07-12 19:31:45.467724 | orchestrator |
2025-07-12 19:31:45.467733 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-07-12 19:31:45.467744 | orchestrator | Saturday 12 July 2025 19:31:42 +0000 (0:00:01.082) 0:00:06.065 *********
2025-07-12 19:31:45.467754 | orchestrator | changed: [testbed-manager]
2025-07-12 19:31:45.467764 | orchestrator |
2025-07-12 19:31:45.467774 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-07-12 19:31:45.467786 | orchestrator | Saturday 12 July 2025 19:31:43 +0000 (0:00:01.082) 0:00:07.147 *********
2025-07-12 19:31:45.467800 | orchestrator | ok: [testbed-manager]
2025-07-12 19:31:45.467814 | orchestrator |
2025-07-12 19:31:45.467831 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-07-12 19:31:45.467847 | orchestrator | Saturday 12 July 2025 19:31:44 +0000 (0:00:00.846) 0:00:07.993 *********
2025-07-12 19:31:45.467864 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-07-12 19:31:45.467880 | orchestrator |
2025-07-12 19:31:45.467925 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-07-12 19:31:45.467944 | orchestrator | Saturday 12 July 2025 19:31:44 +0000 (0:00:00.062) 0:00:08.056 *********
2025-07-12 19:31:45.467957 | orchestrator | changed: [testbed-manager]
2025-07-12 19:31:45.467967 | orchestrator |
2025-07-12 19:31:45.467986 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:31:45.467998 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 19:31:45.468010 | orchestrator |
2025-07-12 19:31:45.468024 | orchestrator |
2025-07-12 19:31:45.468039 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:31:45.468069 | orchestrator | Saturday 12 July 2025 19:31:45 +0000 (0:00:01.007) 0:00:09.064 *********
2025-07-12 19:31:45.468084 | orchestrator | ===============================================================================
2025-07-12 19:31:45.468095 | orchestrator | Gathering Facts --------------------------------------------------------- 3.68s
2025-07-12 19:31:45.468107 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2025-07-12 19:31:45.468124 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.01s
2025-07-12 19:31:45.468137 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.95s
2025-07-12 19:31:45.468154 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.85s
2025-07-12 19:31:45.468166 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.48s
2025-07-12 19:31:45.468193 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.44s
2025-07-12 19:31:45.468204 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-07-12 19:31:45.468215 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-07-12 19:31:45.468225 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-07-12 19:31:45.468236 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2025-07-12 19:31:45.468246 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.06s
2025-07-12 19:31:45.468256 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-07-12 19:31:45.636859 | orchestrator | + osism apply sshconfig
2025-07-12 19:31:57.276434 | orchestrator | 2025-07-12 19:31:57 | INFO  | Task 1f638143-1f56-449c-b055-458974559e7e (sshconfig) was prepared for execution.
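Editor's note: each `osism apply` run in this log ends with a PLAY RECAP line, and whether a run counts as clean hinges on its `failed=` and `unreachable=` counters being zero. A small illustrative helper for checking such a line (an assumption for illustration — not part of the osism tooling):

```shell
#!/usr/bin/env bash
# Return success only when an Ansible PLAY RECAP host line reports
# zero failed and zero unreachable tasks.
recap_ok() {
    local line=$1 failed unreachable
    failed=$(grep -o 'failed=[0-9]*' <<<"$line" | head -n1 | cut -d= -f2)
    unreachable=$(grep -o 'unreachable=[0-9]*' <<<"$line" | head -n1 | cut -d= -f2)
    [[ ${failed:-0} -eq 0 && ${unreachable:-0} -eq 0 ]]
}
```

For the recap above (`ok=10 changed=3 unreachable=0 failed=0 skipped=3`), `recap_ok` would return success.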
2025-07-12 19:31:57.276541 | orchestrator | 2025-07-12 19:31:57 | INFO  | It takes a moment until task 1f638143-1f56-449c-b055-458974559e7e (sshconfig) has been started and output is visible here.
2025-07-12 19:32:08.794169 | orchestrator |
2025-07-12 19:32:08.794299 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-07-12 19:32:08.794324 | orchestrator |
2025-07-12 19:32:08.794343 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-07-12 19:32:08.794361 | orchestrator | Saturday 12 July 2025 19:32:01 +0000 (0:00:00.159) 0:00:00.159 *********
2025-07-12 19:32:08.794379 | orchestrator | ok: [testbed-manager]
2025-07-12 19:32:08.794398 | orchestrator |
2025-07-12 19:32:08.794416 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-07-12 19:32:08.794435 | orchestrator | Saturday 12 July 2025 19:32:01 +0000 (0:00:00.620) 0:00:00.779 *********
2025-07-12 19:32:08.794454 | orchestrator | changed: [testbed-manager]
2025-07-12 19:32:08.794473 | orchestrator |
2025-07-12 19:32:08.794491 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-07-12 19:32:08.794509 | orchestrator | Saturday 12 July 2025 19:32:02 +0000 (0:00:00.488) 0:00:01.268 *********
2025-07-12 19:32:08.794528 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-07-12 19:32:08.794547 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-07-12 19:32:08.794566 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-07-12 19:32:08.794585 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-07-12 19:32:08.794602 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-07-12 19:32:08.794621 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-07-12 19:32:08.794642 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-07-12 19:32:08.794663 | orchestrator |
2025-07-12 19:32:08.794682 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-07-12 19:32:08.794703 | orchestrator | Saturday 12 July 2025 19:32:07 +0000 (0:00:05.626) 0:00:06.895 *********
2025-07-12 19:32:08.794784 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:32:08.794807 | orchestrator |
2025-07-12 19:32:08.794827 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-07-12 19:32:08.794846 | orchestrator | Saturday 12 July 2025 19:32:07 +0000 (0:00:00.068) 0:00:06.964 *********
2025-07-12 19:32:08.794867 | orchestrator | changed: [testbed-manager]
2025-07-12 19:32:08.794886 | orchestrator |
2025-07-12 19:32:08.794951 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:32:08.794967 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 19:32:08.794981 | orchestrator |
2025-07-12 19:32:08.794992 | orchestrator |
2025-07-12 19:32:08.795003 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:32:08.795014 | orchestrator | Saturday 12 July 2025 19:32:08 +0000 (0:00:00.607) 0:00:07.571 *********
2025-07-12 19:32:08.795025 | orchestrator | ===============================================================================
2025-07-12 19:32:08.795036 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.63s
2025-07-12 19:32:08.795047 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.62s
2025-07-12 19:32:08.795058 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s
2025-07-12 19:32:08.795069 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s
2025-07-12 19:32:08.795080 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s
2025-07-12 19:32:09.058449 | orchestrator | + osism apply known-hosts
2025-07-12 19:32:20.966762 | orchestrator | 2025-07-12 19:32:20 | INFO  | Task 476d3d37-5f24-4874-87cc-b1d7d187b330 (known-hosts) was prepared for execution.
2025-07-12 19:32:20.966855 | orchestrator | 2025-07-12 19:32:20 | INFO  | It takes a moment until task 476d3d37-5f24-4874-87cc-b1d7d187b330 (known-hosts) has been started and output is visible here.
2025-07-12 19:32:35.452167 | orchestrator |
2025-07-12 19:32:35.452263 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-07-12 19:32:35.452279 | orchestrator |
2025-07-12 19:32:35.452291 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-07-12 19:32:35.452303 | orchestrator | Saturday 12 July 2025 19:32:23 +0000 (0:00:00.137) 0:00:00.137 *********
2025-07-12 19:32:35.452314 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-12 19:32:35.452326 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-12 19:32:35.452337 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-12 19:32:35.452348 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-12 19:32:35.452359 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-12 19:32:35.452370 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-12 19:32:35.452388 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-12 19:32:35.452407 | orchestrator |
2025-07-12 19:32:35.452424 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-07-12 19:32:35.452444 | orchestrator | Saturday 12 July 2025 19:32:29 +0000 (0:00:05.674) 0:00:05.811 *********
2025-07-12 19:32:35.452463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-12 19:32:35.452482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-12 19:32:35.452501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-12 19:32:35.452519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-12 19:32:35.452568 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-12 19:32:35.452588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-12 19:32:35.452611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-12 19:32:35.452623 | orchestrator |
2025-07-12 19:32:35.452634 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:35.452662 | orchestrator | Saturday 12 July 2025 19:32:29 +0000 (0:00:00.134) 0:00:05.946 *********
2025-07-12 19:32:35.452683 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtp3qetnbqWLUJ4RkyRy8EpU19BzWAuPIZOIc4pqsk5)
2025-07-12 19:32:35.452699 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFtbqnv4xfwQ/SE82l+AD41HjiTVcwC+peakARLauFrsnWDTiJ+ac6a5rfTLQiy6+jZKOl7OQY81/na0CRxeEyI0tg7UBpmHGWJB1WpLjl/ysemqkNQrMu3xdVWTQob9tN8wXRbTutNOKv8pA7YpoDfA6Qxd8WyqohEoW9Zgzv3zSdn/ktDDDAUb+ttvS9wxYQQ5P8bbemD14UHJns58FY05N0dMQrTKPl3o0gbBcvI7jh+97EGcz6i68vZ8qtDwIn0Vrc/Bvlaclnc0FM298pKK/JMnCbh1qzH4F6iy6VN7jjkLW+SATV1+56KWK3ZAxhQvGdriYBaGD6bfXji49ci9uESjAIlcXHO4xR55uR37QTZK4p9MkHXiqFiZWC2zQ1KNXtZbpPLuKjLT72F34coTIEDjSmt5/YzwWnxknfj+Q67aj7SQ2xR1pqmOhIqbUT8P6pIfnYt+eiljHl3T/AIHzPtxIpb6mnr+V6xVdxvcJR0JNBTIPYmJGMj2wkXj8=)
2025-07-12 19:32:35.452716 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMsQDTZgysR0tsYUhcwW/jUxnV5fc0eRm91So6lK9c+4NxQ593AM3UOqaI2C+AbvaleE1XTlpO+5yaWvScKGmsI=)
2025-07-12 19:32:35.452730 | orchestrator |
2025-07-12 19:32:35.452743 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:35.452756 | orchestrator | Saturday 12 July 2025 19:32:30 +0000 (0:00:01.073) 0:00:07.019 *********
2025-07-12 19:32:35.452787 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDwk0iVGjWq02YYFM3Pdo5OJYXcf6MLJOyPXDrNosf6wAFdQiMVlBaQwb7omZ7l3HX7+TiSrL1L3/QvjSxJEBUZVy0VgM6kF2NlogDMAMO1keyudySmGjP5cPBjzKU01tTZlp0X4WxjeitsIG/M9sJeA0FejfEaC8TYogMPjo7WYsFl5kGo89I4suUdQCY9K2jMDETOXZb9nhhoeKcrSLyw+9FCe5BbWPzb+9lljqy9Lb4RskbO78NP8D7HzK7Q6GukkVRjDLR/mExtMC1kljvsh5+z7bFR1+9aVgyq7yQtXDOOolFE0SRWhZbIjVKyNEQ+Wt5A//qXA6lkZnz3hwWi78VJHg16dWGcVY+lF5pz0THaiKi8JUUgCuirVWww3tcFQLtBMJnu0ZyHU6rD8BgOKNuwhlvCtw/VRz7GuA4K7HzOHyDq7HZc3tgpRwUfZYghyf2xvW0t/eLYj0DeVcTQfIwcPP6HUpex+IulcZpl+FAObxpW9sXd/Kca5Gfsh8=)
2025-07-12 19:32:35.452800 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIi84Nl9A5TZRmpCsfCgjes283JSdI2eM4/VVcjdLpZd7WOu0ViPrPEkj+5Z/GIS6Wq3vomU3UwfSptVvGlY+n0=)
2025-07-12 19:32:35.452813 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDB2vRHBtXbjbsDBFwKltghj3iWMEFNQjNbIBX2qNZ7A)
2025-07-12 19:32:35.452825 | orchestrator |
2025-07-12 19:32:35.452838 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:35.452850 | orchestrator | Saturday 12 July 2025 19:32:31 +0000 (0:00:00.935) 0:00:07.955 *********
2025-07-12 19:32:35.452863 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnpyPhA5ecsXHaiu6XI4PrSTvzE9Np5yBVZ/qAfr7R+GNUDQ2QhCbieZGbLUxyohxCUTDC9EMsfme90l4uulzbmMf9/WMHYW/j2fDjQ8gFXx7X8rrJWuHvEmNhY1rZ7gkLWV4venOs3b8BuY97I4y/6NK633UM4w1I50aNBvwi/kK/lJUoK9bz+aM+IeLu66Yj5GPMTangnrt5v3KnyIqrouTdAtAWCgsfslVK2Xhuv4aRP57rEmEQVznXF/FMEd6zrM1K/CfDlGkedQbJVEBX0tAmQ0LBvKMBzogwbd5CaPnLuRgXob+T3fF4pnKVUGIXOmoD2ye9pCc00T9SNOCPosFKIES9IblujWxSqm1zmm8mS5AszdCQ56OzqbIc9EWJgAShctYCgf658b+afvvMXnIvsHkU7U4nZ7VQWk0RPyDnhfGyidCyZhflr9zOETC0lWqEarSBWEQP8elXgGKkmMFjX2fSueQesaby6LhbquMZq1MbLzxnAZ+2SkS9yFU=)
2025-07-12 19:32:35.452885 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEGWogeBw/uoJybxVL3L8X87ta5+pUB0pOCwjt8qvPBCWbyEz1LnpYvLrI/edk0XX9gVBFESLeaTdlTWxwB87bE=)
2025-07-12 19:32:35.452898 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDNe0fIEw5me54XFNtxvE8Iv9oze0s2tYSen3IRa/8oy)
2025-07-12 19:32:35.452949 | orchestrator |
2025-07-12 19:32:35.452961 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:35.452973 | orchestrator | Saturday 12 July 2025 19:32:32 +0000 (0:00:00.918) 0:00:08.873 *********
2025-07-12 19:32:35.453049 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0B6qgIPf+3XQaJS4P6f7N90t84BbmXm7QhpLv86pnHyVXb/4iA4IY09Wss1/j7IumxbGsVEPWPfq3xB3VTsU0LUjQshgYZ2hQjbk8jltQyaCICm39JITAg6HMQor+EDUjpH9qTi2S4zVvTwzawT81P1kGatT39QVpu6WVp6R8LEm9mYaLBCh9YUtAiYjIS8KB2Y1GmDkCIFU2zB/KZVIE3hXJDmNgfV2U2pvYVCK8mlWJfuPfYAsYKkHk9093zbatJehsOmeNT/LRf2cvpGfVh3e2Rpr+fUYsZu3gd8HFhpsJ4gPyfC09BhQY+1QZs87BWdYhOXJBqIGtWnlOI00WeJDBB8rVrcP1+DXPWnRGU32weEjEpWuX+Vm63M+A7ARtsiYIrbx3DhxGDCPibcxEGAEdodRJsy76n6Hv3xD4pYz2x6m8kMfzxsmhDHWpwOEb3PrkBq18uw2KxGtQ/PKGo5e8C+iTQ/WejNilZGPIWKjyieRnIBtweQFFyMJUHQM=)
2025-07-12 19:32:35.453064 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLyqxldXYeKPwmNGko5d4enC7gyQafqIYcKfdpcXhQwqBdYCrPSE7dcT99rk/ynJ6hUGawRnnQbIhma1H5ZD3Uk=)
2025-07-12 19:32:35.453076 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILegUmd1kvjwXs9y5Eb2M4G0wb3XJ+wYuPRDS1UygwO6)
2025-07-12 19:32:35.453087 | orchestrator |
2025-07-12 19:32:35.453098 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:35.453109 | orchestrator | Saturday 12 July 2025 19:32:33 +0000 (0:00:00.947) 0:00:09.821 *********
2025-07-12 19:32:35.453120 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC32e+W1zwm/qHYHYVUQort6iPUvgoIQn+wOuM0Xmvt5PlPMaogPrT6juL19eNDe8ov4+2DzDB4LxOpwxu986qMEnshHN8KwM+hl6NOr4zAMYtcEsfj5ZSLbZg9TlLLKOqhrr+pPy0mVjfXRO7ievI/e7MX7WvDuNsceIExR0yPajA5WkpVlVnUu1f375nrKs/JYh0KEafeT9PWh4luGUJTnrT4vXDyKiIqgWhhsi2ATKkh5c64hVhlDTOn9fdzCpZh0xexymrzm2UzRs9WUvvMJuBw4nj4IKSSYWp5gW1ux4zME1wOthtPwCcxdJN2SK8SsI2PhqqgOv2evRKtThfY2fVNJLt5HIc38mv02A6hRo9dx/yd77zo1ZjJDxTLHcGfPpS8w5WprcrXoSMqDwM5n1qdaMGRqGloEkHMgduXmdI83q8C0MGyzGk1MPQsIl4zwb/Yxt5s9Zqv74q1WAcs2Qh0nncZQ+G+GRkMX9YBJzHYwkhD0p6OnWWPW2THVlE=)
2025-07-12 19:32:35.453132 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH3KLopjb93pgZS+VHU6rBcuTZXGK9JPtsAUndM1vVzug5F3AeVzU8AR7us9xhq+vd39emvQQn2YVDt/OibV/Qk=)
2025-07-12 19:32:35.453143 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMrIW9CA8575vFX2hgNdxxowE/izff1NN7Cs/SRQCy7s)
2025-07-12 19:32:35.453154 | orchestrator |
2025-07-12 19:32:35.453165 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:35.453176 | orchestrator | Saturday 12 July 2025 19:32:34 +0000 (0:00:00.926) 0:00:10.747 *********
2025-07-12 19:32:35.453197 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMbssKkH6cuVuwHFnZVp158imNTMU+CeTIgrnQmY7sE8McdagwfrOs7ghd7qPsGRf741yutcr6a1KBZC1urq7rOVVKFsF02BHiVffn0hL6j5dUmqlqDvA7lEMviIhJmbMp7TN/ptiNj/eSYE1IMltCWDAImC8M0VLU1YDmHrZIVHCWU5G62xzdWfQVUMof8J8fZR/x7zVOWJrDpfSLxu9jnXAAZFrHGfmwSwFYqYDIJuiqFkdPnX0DsRNmsky0ig11ru+Dr6GFdbcGOmezCpEIDR6nMjvqvol47uDETVeh9aNU0hocWYoT8bpgT/Dk2478iVsZJ06fGZBSHw1e3ySxNoTfrbzxzNClJgeMZX9SXDvhYU90kV2dYWqdtRU/zL3ToOaey/MafL6sJTjkMxwIp2VO+3zI1PovPRe1tNQRd8UD+uZalB3Xi6ACsHAYpurKx+xAGQ8GKJcI9HU/4DVrOwNdTrXNGZ5VxH3UiDtnDsBRI1FX1sLjdqXgwZJg58s=)
2025-07-12 19:32:45.980831 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNeOl9grh24YmQKhQ18d6yzN65P0g6lQL2m9fA/y7s5gX3y/IVB3dV68PQwek/FwSdi9H2ph44a6K8HR7UyWmNQ=)
2025-07-12 19:32:45.980965 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHKYSKnUODE3gsj5gR136rVZ578FKhF0dkjn0qZ7mP0i)
2025-07-12 19:32:45.980984 | orchestrator |
2025-07-12 19:32:45.980999 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:45.981012 | orchestrator | Saturday 12 July 2025 19:32:35 +0000 (0:00:00.964) 0:00:11.712 *********
2025-07-12 19:32:45.981026 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDr4NqLYdFQsJ6F/dTs1vPzv6DJzvYFiFSjmFg9VGafoz5b8/SiN21SD6JX0H1WsVLcvuErSb9hqHDU/JAUZ4rCMvuTVpJbuux4l87/tjTZQGC6rLU4ufie0TCJDKO1c6qMR4a+WKz7I6LLBsIf7Udz9VqjBtSUmhuDde8Oeg5nRePjyAPv+YYXxUbFH3fQGrqfNgQDSP4sNUj9JxfLzPDtxkZwCkxxE/YI99V3WfrP62LFg2+Abi8c4nFdKAHpyiDW2+EcUy6bPImVPMeR0044Mvh+TZRoX+sX4ccvMRiKpgbaKvoMIn+r8sas91cYu4XcnGZExS0P+CVnA7EhA64achP5faKBkGKOg2Dpbjbt4nX2kk9fbFc+lj7yXtqmM2arjle5ILqP//2GIas2+XM01EW//x7nbhh5RBgUw+pcPGrNkfaAT/YnIrQiMKYoMcrSoD3PLl3W8D3SGY6us9cb/XAjPpErKL9jVLJ2xKwn4vzWP4S3+pwOntq4b6hDluE=)
2025-07-12 19:32:45.981040 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBaTpja/hik+qNqbTTSMuXLxnc88tfNI+U8aeouhu54apZOtJBlRgqJApfqNu0zUDJcAZUVfq28PP0wuUb1Wm3I=)
2025-07-12 19:32:45.981051 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGF4X7ty3f1ocUOvduebzagX72y5Na7Vs/OcBziBuP4d)
2025-07-12 19:32:45.981062 | orchestrator |
2025-07-12 19:32:45.981073 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-07-12 19:32:45.981085 | orchestrator | Saturday 12 July 2025 19:32:36 +0000 (0:00:01.019) 0:00:12.732 *********
2025-07-12 19:32:45.981097 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-07-12 19:32:45.981108 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-07-12 19:32:45.981118 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-07-12 19:32:45.981129 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-07-12 19:32:45.981140 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-07-12 19:32:45.981151 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-07-12 19:32:45.981161 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-07-12 19:32:45.981172 | orchestrator |
2025-07-12 19:32:45.981202 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-07-12 19:32:45.981214 | orchestrator | Saturday 12 July 2025 19:32:41 +0000 (0:00:05.160) 0:00:17.892 *********
2025-07-12 19:32:45.981226 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-07-12 19:32:45.981238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-07-12 19:32:45.981249 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-07-12 19:32:45.981260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-07-12 19:32:45.981271 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-07-12 19:32:45.981305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-07-12 19:32:45.981317 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-07-12 19:32:45.981327 | orchestrator |
2025-07-12 19:32:45.981338 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:45.981349 | orchestrator | Saturday 12 July 2025 19:32:41 +0000 (0:00:00.157) 0:00:18.050 *********
2025-07-12 19:32:45.981360 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBtp3qetnbqWLUJ4RkyRy8EpU19BzWAuPIZOIc4pqsk5)
2025-07-12 19:32:45.981397 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFtbqnv4xfwQ/SE82l+AD41HjiTVcwC+peakARLauFrsnWDTiJ+ac6a5rfTLQiy6+jZKOl7OQY81/na0CRxeEyI0tg7UBpmHGWJB1WpLjl/ysemqkNQrMu3xdVWTQob9tN8wXRbTutNOKv8pA7YpoDfA6Qxd8WyqohEoW9Zgzv3zSdn/ktDDDAUb+ttvS9wxYQQ5P8bbemD14UHJns58FY05N0dMQrTKPl3o0gbBcvI7jh+97EGcz6i68vZ8qtDwIn0Vrc/Bvlaclnc0FM298pKK/JMnCbh1qzH4F6iy6VN7jjkLW+SATV1+56KWK3ZAxhQvGdriYBaGD6bfXji49ci9uESjAIlcXHO4xR55uR37QTZK4p9MkHXiqFiZWC2zQ1KNXtZbpPLuKjLT72F34coTIEDjSmt5/YzwWnxknfj+Q67aj7SQ2xR1pqmOhIqbUT8P6pIfnYt+eiljHl3T/AIHzPtxIpb6mnr+V6xVdxvcJR0JNBTIPYmJGMj2wkXj8=)
2025-07-12 19:32:45.981412 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMsQDTZgysR0tsYUhcwW/jUxnV5fc0eRm91So6lK9c+4NxQ593AM3UOqaI2C+AbvaleE1XTlpO+5yaWvScKGmsI=)
2025-07-12 19:32:45.981424 | orchestrator |
2025-07-12 19:32:45.981437 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:45.981450 | orchestrator | Saturday 12 July 2025 19:32:42 +0000 (0:00:01.057) 0:00:19.107 *********
2025-07-12 19:32:45.981463 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDwk0iVGjWq02YYFM3Pdo5OJYXcf6MLJOyPXDrNosf6wAFdQiMVlBaQwb7omZ7l3HX7+TiSrL1L3/QvjSxJEBUZVy0VgM6kF2NlogDMAMO1keyudySmGjP5cPBjzKU01tTZlp0X4WxjeitsIG/M9sJeA0FejfEaC8TYogMPjo7WYsFl5kGo89I4suUdQCY9K2jMDETOXZb9nhhoeKcrSLyw+9FCe5BbWPzb+9lljqy9Lb4RskbO78NP8D7HzK7Q6GukkVRjDLR/mExtMC1kljvsh5+z7bFR1+9aVgyq7yQtXDOOolFE0SRWhZbIjVKyNEQ+Wt5A//qXA6lkZnz3hwWi78VJHg16dWGcVY+lF5pz0THaiKi8JUUgCuirVWww3tcFQLtBMJnu0ZyHU6rD8BgOKNuwhlvCtw/VRz7GuA4K7HzOHyDq7HZc3tgpRwUfZYghyf2xvW0t/eLYj0DeVcTQfIwcPP6HUpex+IulcZpl+FAObxpW9sXd/Kca5Gfsh8=)
2025-07-12 19:32:45.981476 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIi84Nl9A5TZRmpCsfCgjes283JSdI2eM4/VVcjdLpZd7WOu0ViPrPEkj+5Z/GIS6Wq3vomU3UwfSptVvGlY+n0=)
2025-07-12 19:32:45.981488 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDB2vRHBtXbjbsDBFwKltghj3iWMEFNQjNbIBX2qNZ7A)
2025-07-12 19:32:45.981501 | orchestrator |
2025-07-12 19:32:45.981513 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:45.981526 | orchestrator | Saturday 12 July 2025 19:32:43 +0000 (0:00:01.033) 0:00:20.141 *********
2025-07-12 19:32:45.981538 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDnpyPhA5ecsXHaiu6XI4PrSTvzE9Np5yBVZ/qAfr7R+GNUDQ2QhCbieZGbLUxyohxCUTDC9EMsfme90l4uulzbmMf9/WMHYW/j2fDjQ8gFXx7X8rrJWuHvEmNhY1rZ7gkLWV4venOs3b8BuY97I4y/6NK633UM4w1I50aNBvwi/kK/lJUoK9bz+aM+IeLu66Yj5GPMTangnrt5v3KnyIqrouTdAtAWCgsfslVK2Xhuv4aRP57rEmEQVznXF/FMEd6zrM1K/CfDlGkedQbJVEBX0tAmQ0LBvKMBzogwbd5CaPnLuRgXob+T3fF4pnKVUGIXOmoD2ye9pCc00T9SNOCPosFKIES9IblujWxSqm1zmm8mS5AszdCQ56OzqbIc9EWJgAShctYCgf658b+afvvMXnIvsHkU7U4nZ7VQWk0RPyDnhfGyidCyZhflr9zOETC0lWqEarSBWEQP8elXgGKkmMFjX2fSueQesaby6LhbquMZq1MbLzxnAZ+2SkS9yFU=)
2025-07-12 19:32:45.981551 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEGWogeBw/uoJybxVL3L8X87ta5+pUB0pOCwjt8qvPBCWbyEz1LnpYvLrI/edk0XX9gVBFESLeaTdlTWxwB87bE=)
2025-07-12 19:32:45.981572 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDNe0fIEw5me54XFNtxvE8Iv9oze0s2tYSen3IRa/8oy)
2025-07-12 19:32:45.981584 | orchestrator |
2025-07-12 19:32:45.981597 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-07-12 19:32:45.981610 | orchestrator | Saturday 12 July 2025 19:32:44 +0000 (0:00:01.040) 0:00:21.182 *********
2025-07-12 19:32:45.981632 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0B6qgIPf+3XQaJS4P6f7N90t84BbmXm7QhpLv86pnHyVXb/4iA4IY09Wss1/j7IumxbGsVEPWPfq3xB3VTsU0LUjQshgYZ2hQjbk8jltQyaCICm39JITAg6HMQor+EDUjpH9qTi2S4zVvTwzawT81P1kGatT39QVpu6WVp6R8LEm9mYaLBCh9YUtAiYjIS8KB2Y1GmDkCIFU2zB/KZVIE3hXJDmNgfV2U2pvYVCK8mlWJfuPfYAsYKkHk9093zbatJehsOmeNT/LRf2cvpGfVh3e2Rpr+fUYsZu3gd8HFhpsJ4gPyfC09BhQY+1QZs87BWdYhOXJBqIGtWnlOI00WeJDBB8rVrcP1+DXPWnRGU32weEjEpWuX+Vm63M+A7ARtsiYIrbx3DhxGDCPibcxEGAEdodRJsy76n6Hv3xD4pYz2x6m8kMfzxsmhDHWpwOEb3PrkBq18uw2KxGtQ/PKGo5e8C+iTQ/WejNilZGPIWKjyieRnIBtweQFFyMJUHQM=)
2025-07-12 19:32:45.981644 | orchestrator | changed: [testbed-manager] =>
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLyqxldXYeKPwmNGko5d4enC7gyQafqIYcKfdpcXhQwqBdYCrPSE7dcT99rk/ynJ6hUGawRnnQbIhma1H5ZD3Uk=) 2025-07-12 19:32:45.981667 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILegUmd1kvjwXs9y5Eb2M4G0wb3XJ+wYuPRDS1UygwO6) 2025-07-12 19:32:50.139786 | orchestrator | 2025-07-12 19:32:50.139883 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:32:50.139933 | orchestrator | Saturday 12 July 2025 19:32:45 +0000 (0:00:01.055) 0:00:22.237 ********* 2025-07-12 19:32:50.139949 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH3KLopjb93pgZS+VHU6rBcuTZXGK9JPtsAUndM1vVzug5F3AeVzU8AR7us9xhq+vd39emvQQn2YVDt/OibV/Qk=) 2025-07-12 19:32:50.139966 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC32e+W1zwm/qHYHYVUQort6iPUvgoIQn+wOuM0Xmvt5PlPMaogPrT6juL19eNDe8ov4+2DzDB4LxOpwxu986qMEnshHN8KwM+hl6NOr4zAMYtcEsfj5ZSLbZg9TlLLKOqhrr+pPy0mVjfXRO7ievI/e7MX7WvDuNsceIExR0yPajA5WkpVlVnUu1f375nrKs/JYh0KEafeT9PWh4luGUJTnrT4vXDyKiIqgWhhsi2ATKkh5c64hVhlDTOn9fdzCpZh0xexymrzm2UzRs9WUvvMJuBw4nj4IKSSYWp5gW1ux4zME1wOthtPwCcxdJN2SK8SsI2PhqqgOv2evRKtThfY2fVNJLt5HIc38mv02A6hRo9dx/yd77zo1ZjJDxTLHcGfPpS8w5WprcrXoSMqDwM5n1qdaMGRqGloEkHMgduXmdI83q8C0MGyzGk1MPQsIl4zwb/Yxt5s9Zqv74q1WAcs2Qh0nncZQ+G+GRkMX9YBJzHYwkhD0p6OnWWPW2THVlE=) 2025-07-12 19:32:50.139981 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMrIW9CA8575vFX2hgNdxxowE/izff1NN7Cs/SRQCy7s) 2025-07-12 19:32:50.139994 | orchestrator | 2025-07-12 19:32:50.140005 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:32:50.140016 | orchestrator | Saturday 12 July 2025 19:32:47 +0000 (0:00:01.069) 0:00:23.307 
********* 2025-07-12 19:32:50.140028 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMbssKkH6cuVuwHFnZVp158imNTMU+CeTIgrnQmY7sE8McdagwfrOs7ghd7qPsGRf741yutcr6a1KBZC1urq7rOVVKFsF02BHiVffn0hL6j5dUmqlqDvA7lEMviIhJmbMp7TN/ptiNj/eSYE1IMltCWDAImC8M0VLU1YDmHrZIVHCWU5G62xzdWfQVUMof8J8fZR/x7zVOWJrDpfSLxu9jnXAAZFrHGfmwSwFYqYDIJuiqFkdPnX0DsRNmsky0ig11ru+Dr6GFdbcGOmezCpEIDR6nMjvqvol47uDETVeh9aNU0hocWYoT8bpgT/Dk2478iVsZJ06fGZBSHw1e3ySxNoTfrbzxzNClJgeMZX9SXDvhYU90kV2dYWqdtRU/zL3ToOaey/MafL6sJTjkMxwIp2VO+3zI1PovPRe1tNQRd8UD+uZalB3Xi6ACsHAYpurKx+xAGQ8GKJcI9HU/4DVrOwNdTrXNGZ5VxH3UiDtnDsBRI1FX1sLjdqXgwZJg58s=) 2025-07-12 19:32:50.140039 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNeOl9grh24YmQKhQ18d6yzN65P0g6lQL2m9fA/y7s5gX3y/IVB3dV68PQwek/FwSdi9H2ph44a6K8HR7UyWmNQ=) 2025-07-12 19:32:50.140072 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHKYSKnUODE3gsj5gR136rVZ578FKhF0dkjn0qZ7mP0i) 2025-07-12 19:32:50.140083 | orchestrator | 2025-07-12 19:32:50.140094 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:32:50.140105 | orchestrator | Saturday 12 July 2025 19:32:48 +0000 (0:00:01.044) 0:00:24.352 ********* 2025-07-12 19:32:50.140117 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGF4X7ty3f1ocUOvduebzagX72y5Na7Vs/OcBziBuP4d) 2025-07-12 19:32:50.140129 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDr4NqLYdFQsJ6F/dTs1vPzv6DJzvYFiFSjmFg9VGafoz5b8/SiN21SD6JX0H1WsVLcvuErSb9hqHDU/JAUZ4rCMvuTVpJbuux4l87/tjTZQGC6rLU4ufie0TCJDKO1c6qMR4a+WKz7I6LLBsIf7Udz9VqjBtSUmhuDde8Oeg5nRePjyAPv+YYXxUbFH3fQGrqfNgQDSP4sNUj9JxfLzPDtxkZwCkxxE/YI99V3WfrP62LFg2+Abi8c4nFdKAHpyiDW2+EcUy6bPImVPMeR0044Mvh+TZRoX+sX4ccvMRiKpgbaKvoMIn+r8sas91cYu4XcnGZExS0P+CVnA7EhA64achP5faKBkGKOg2Dpbjbt4nX2kk9fbFc+lj7yXtqmM2arjle5ILqP//2GIas2+XM01EW//x7nbhh5RBgUw+pcPGrNkfaAT/YnIrQiMKYoMcrSoD3PLl3W8D3SGY6us9cb/XAjPpErKL9jVLJ2xKwn4vzWP4S3+pwOntq4b6hDluE=) 2025-07-12 19:32:50.140140 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBaTpja/hik+qNqbTTSMuXLxnc88tfNI+U8aeouhu54apZOtJBlRgqJApfqNu0zUDJcAZUVfq28PP0wuUb1Wm3I=) 2025-07-12 19:32:50.140151 | orchestrator | 2025-07-12 19:32:50.140162 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-07-12 19:32:50.140173 | orchestrator | Saturday 12 July 2025 19:32:49 +0000 (0:00:01.043) 0:00:25.395 ********* 2025-07-12 19:32:50.140184 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-12 19:32:50.140196 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 19:32:50.140207 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-12 19:32:50.140218 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-12 19:32:50.140229 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-12 19:32:50.140240 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-12 19:32:50.140251 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-12 19:32:50.140262 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:32:50.140273 | orchestrator | 2025-07-12 19:32:50.140300 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts 
entries] ************* 2025-07-12 19:32:50.140312 | orchestrator | Saturday 12 July 2025 19:32:49 +0000 (0:00:00.162) 0:00:25.557 ********* 2025-07-12 19:32:50.140323 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:32:50.140334 | orchestrator | 2025-07-12 19:32:50.140344 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-07-12 19:32:50.140356 | orchestrator | Saturday 12 July 2025 19:32:49 +0000 (0:00:00.064) 0:00:25.622 ********* 2025-07-12 19:32:50.140367 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:32:50.140378 | orchestrator | 2025-07-12 19:32:50.140389 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-07-12 19:32:50.140400 | orchestrator | Saturday 12 July 2025 19:32:49 +0000 (0:00:00.056) 0:00:25.678 ********* 2025-07-12 19:32:50.140410 | orchestrator | changed: [testbed-manager] 2025-07-12 19:32:50.140421 | orchestrator | 2025-07-12 19:32:50.140432 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:32:50.140443 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 19:32:50.140456 | orchestrator | 2025-07-12 19:32:50.140466 | orchestrator | 2025-07-12 19:32:50.140477 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:32:50.140488 | orchestrator | Saturday 12 July 2025 19:32:49 +0000 (0:00:00.490) 0:00:26.168 ********* 2025-07-12 19:32:50.140499 | orchestrator | =============================================================================== 2025-07-12 19:32:50.140516 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.67s 2025-07-12 19:32:50.140527 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.16s 2025-07-12 19:32:50.140548 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-07-12 19:32:50.140560 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-07-12 19:32:50.140571 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-07-12 19:32:50.140582 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-07-12 19:32:50.140593 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-12 19:32:50.140604 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-12 19:32:50.140614 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-12 19:32:50.140625 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-07-12 19:32:50.140636 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-07-12 19:32:50.140647 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2025-07-12 19:32:50.140658 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-07-12 19:32:50.140668 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-07-12 19:32:50.140679 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2025-07-12 19:32:50.140690 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.92s 2025-07-12 19:32:50.140701 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s 2025-07-12 19:32:50.140711 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-07-12 19:32:50.140722 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-07-12 19:32:50.140734 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.13s 2025-07-12 19:32:50.413508 | orchestrator | + osism apply squid 2025-07-12 19:33:02.393798 | orchestrator | 2025-07-12 19:33:02 | INFO  | Task c33451ad-1ebb-4fbf-a12e-f0215e2114cd (squid) was prepared for execution. 2025-07-12 19:33:02.393971 | orchestrator | 2025-07-12 19:33:02 | INFO  | It takes a moment until task c33451ad-1ebb-4fbf-a12e-f0215e2114cd (squid) has been started and output is visible here. 2025-07-12 19:34:55.417255 | orchestrator | 2025-07-12 19:34:55.417363 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-07-12 19:34:55.417380 | orchestrator | 2025-07-12 19:34:55.417392 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-07-12 19:34:55.417404 | orchestrator | Saturday 12 July 2025 19:33:06 +0000 (0:00:00.121) 0:00:00.121 ********* 2025-07-12 19:34:55.417416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 19:34:55.417428 | orchestrator | 2025-07-12 19:34:55.417440 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-07-12 19:34:55.417467 | orchestrator | Saturday 12 July 2025 19:33:06 +0000 (0:00:00.070) 0:00:00.192 ********* 2025-07-12 19:34:55.417479 | orchestrator | ok: [testbed-manager] 2025-07-12 19:34:55.417491 | orchestrator | 2025-07-12 19:34:55.417503 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-07-12 19:34:55.417514 | orchestrator | Saturday 12 July 2025 19:33:07 +0000 (0:00:01.097) 0:00:01.289 ********* 2025-07-12 19:34:55.417525 | orchestrator | changed: 
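The known_hosts play that finishes above boils down to running ssh-keyscan per node and writing the results with correct permissions. A minimal sketch of that pattern (not the role's actual implementation — the IPs are taken from the log, the output path here is a temp file, and the timeout is an assumption):

```shell
# Sketch of what the osism.commons.known_hosts tasks automate:
# scan each node's SSH host keys into one known_hosts file, then fix the mode.
KNOWN_HOSTS=$(mktemp)
for host in 192.168.16.10 192.168.16.11 192.168.16.12; do
    # -T 1: per-host timeout; the scan simply yields nothing if a node
    # is unreachable, so we tolerate failure with || true
    ssh-keyscan -T 1 "$host" 2>/dev/null >> "$KNOWN_HOSTS" || true
done
chmod 0644 "$KNOWN_HOSTS"   # mirrors the role's final "Set file permissions" task
```

Each scanned entry has the `<ip> <key-type> <base64-key>` shape visible in the "Write scanned known_hosts entries" tasks above.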
[testbed-manager] => (item=/opt/squid/configuration) 2025-07-12 19:34:55.417536 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-07-12 19:34:55.417548 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-07-12 19:34:55.417658 | orchestrator | 2025-07-12 19:34:55.417673 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-07-12 19:34:55.417684 | orchestrator | Saturday 12 July 2025 19:33:08 +0000 (0:00:01.013) 0:00:02.302 ********* 2025-07-12 19:34:55.417695 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-07-12 19:34:55.417706 | orchestrator | 2025-07-12 19:34:55.417717 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-07-12 19:34:55.417728 | orchestrator | Saturday 12 July 2025 19:33:09 +0000 (0:00:00.916) 0:00:03.219 ********* 2025-07-12 19:34:55.417738 | orchestrator | ok: [testbed-manager] 2025-07-12 19:34:55.417749 | orchestrator | 2025-07-12 19:34:55.417760 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-07-12 19:34:55.417771 | orchestrator | Saturday 12 July 2025 19:33:09 +0000 (0:00:00.317) 0:00:03.536 ********* 2025-07-12 19:34:55.417782 | orchestrator | changed: [testbed-manager] 2025-07-12 19:34:55.417793 | orchestrator | 2025-07-12 19:34:55.417804 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-07-12 19:34:55.417817 | orchestrator | Saturday 12 July 2025 19:33:10 +0000 (0:00:00.795) 0:00:04.332 ********* 2025-07-12 19:34:55.417829 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-07-12 19:34:55.417842 | orchestrator | ok: [testbed-manager] 2025-07-12 19:34:55.417854 | orchestrator | 2025-07-12 19:34:55.417866 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-07-12 19:34:55.417879 | orchestrator | Saturday 12 July 2025 19:33:41 +0000 (0:00:31.499) 0:00:35.832 ********* 2025-07-12 19:34:55.417914 | orchestrator | changed: [testbed-manager] 2025-07-12 19:34:55.417926 | orchestrator | 2025-07-12 19:34:55.417939 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-07-12 19:34:55.417951 | orchestrator | Saturday 12 July 2025 19:33:54 +0000 (0:00:12.468) 0:00:48.301 ********* 2025-07-12 19:34:55.417965 | orchestrator | Pausing for 60 seconds 2025-07-12 19:34:55.417978 | orchestrator | changed: [testbed-manager] 2025-07-12 19:34:55.417991 | orchestrator | 2025-07-12 19:34:55.418003 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-07-12 19:34:55.418051 | orchestrator | Saturday 12 July 2025 19:34:54 +0000 (0:01:00.073) 0:01:48.374 ********* 2025-07-12 19:34:55.418066 | orchestrator | ok: [testbed-manager] 2025-07-12 19:34:55.418078 | orchestrator | 2025-07-12 19:34:55.418090 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-07-12 19:34:55.418103 | orchestrator | Saturday 12 July 2025 19:34:54 +0000 (0:00:00.071) 0:01:48.446 ********* 2025-07-12 19:34:55.418115 | orchestrator | changed: [testbed-manager] 2025-07-12 19:34:55.418127 | orchestrator | 2025-07-12 19:34:55.418139 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:34:55.418152 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:34:55.418163 | orchestrator | 2025-07-12 19:34:55.418174 | orchestrator | 2025-07-12 19:34:55.418185 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-07-12 19:34:55.418196 | orchestrator | Saturday 12 July 2025 19:34:55 +0000 (0:00:00.633) 0:01:49.080 ********* 2025-07-12 19:34:55.418207 | orchestrator | =============================================================================== 2025-07-12 19:34:55.418218 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-07-12 19:34:55.418229 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.50s 2025-07-12 19:34:55.418240 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.47s 2025-07-12 19:34:55.418251 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.10s 2025-07-12 19:34:55.418262 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.01s 2025-07-12 19:34:55.418273 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.92s 2025-07-12 19:34:55.418292 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.80s 2025-07-12 19:34:55.418303 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.63s 2025-07-12 19:34:55.418314 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s 2025-07-12 19:34:55.418325 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-07-12 19:34:55.418336 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2025-07-12 19:34:55.660413 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-12 19:34:55.660527 | orchestrator | ++ semver latest 9.0.0 2025-07-12 19:34:55.713939 | orchestrator | + [[ -1 -lt 0 ]] 2025-07-12 19:34:55.714063 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-12 19:34:55.715112 | 
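The squid recap above shows the handler chain pausing 60 seconds and then waiting for a healthy service. A hedged sketch of that health-wait pattern using `docker inspect` (the container name, poll interval, and retry count are assumptions, not taken from the role):

```shell
# Poll a container's health status until it reports healthy, or give up.
wait_healthy() {
    local name=$1 tries=${2:-30}
    local i state
    for i in $(seq "$tries"); do
        state=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
        [ "$state" = "healthy" ] && return 0
        sleep 1
    done
    return 1
}
```

Something like `wait_healthy squid 60` would roughly match the one-minute wait recorded in the log.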
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-07-12 19:35:07.700529 | orchestrator | 2025-07-12 19:35:07 | INFO  | Task 622bcee1-5ca7-43d6-b061-ac013f5380b6 (operator) was prepared for execution. 2025-07-12 19:35:07.700631 | orchestrator | 2025-07-12 19:35:07 | INFO  | It takes a moment until task 622bcee1-5ca7-43d6-b061-ac013f5380b6 (operator) has been started and output is visible here. 2025-07-12 19:35:23.350352 | orchestrator | 2025-07-12 19:35:23.350467 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-07-12 19:35:23.350483 | orchestrator | 2025-07-12 19:35:23.350495 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 19:35:23.350529 | orchestrator | Saturday 12 July 2025 19:35:11 +0000 (0:00:00.115) 0:00:00.115 ********* 2025-07-12 19:35:23.350541 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:35:23.350553 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:35:23.350564 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:35:23.350575 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:35:23.350586 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:35:23.350597 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:35:23.350608 | orchestrator | 2025-07-12 19:35:23.350619 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-07-12 19:35:23.350630 | orchestrator | Saturday 12 July 2025 19:35:15 +0000 (0:00:03.596) 0:00:03.712 ********* 2025-07-12 19:35:23.350641 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:35:23.350652 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:35:23.350663 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:35:23.350674 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:35:23.350685 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:35:23.350696 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:35:23.350707 | orchestrator | 2025-07-12 
19:35:23.350718 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-07-12 19:35:23.350729 | orchestrator | 2025-07-12 19:35:23.350741 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-12 19:35:23.350752 | orchestrator | Saturday 12 July 2025 19:35:15 +0000 (0:00:00.688) 0:00:04.400 ********* 2025-07-12 19:35:23.350763 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:35:23.350774 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:35:23.350785 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:35:23.350796 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:35:23.350807 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:35:23.350817 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:35:23.350828 | orchestrator | 2025-07-12 19:35:23.350840 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-12 19:35:23.350851 | orchestrator | Saturday 12 July 2025 19:35:15 +0000 (0:00:00.130) 0:00:04.530 ********* 2025-07-12 19:35:23.350862 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:35:23.350899 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:35:23.350912 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:35:23.350924 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:35:23.350937 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:35:23.350949 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:35:23.350961 | orchestrator | 2025-07-12 19:35:23.350974 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-12 19:35:23.351012 | orchestrator | Saturday 12 July 2025 19:35:15 +0000 (0:00:00.155) 0:00:04.686 ********* 2025-07-12 19:35:23.351031 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:35:23.351050 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:35:23.351069 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:35:23.351087 | 
orchestrator | changed: [testbed-node-3] 2025-07-12 19:35:23.351106 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:35:23.351124 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:35:23.351142 | orchestrator | 2025-07-12 19:35:23.351161 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-12 19:35:23.351182 | orchestrator | Saturday 12 July 2025 19:35:16 +0000 (0:00:00.595) 0:00:05.281 ********* 2025-07-12 19:35:23.351195 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:35:23.351208 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:35:23.351220 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:35:23.351233 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:35:23.351245 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:35:23.351256 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:35:23.351266 | orchestrator | 2025-07-12 19:35:23.351277 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-12 19:35:23.351288 | orchestrator | Saturday 12 July 2025 19:35:17 +0000 (0:00:00.847) 0:00:06.128 ********* 2025-07-12 19:35:23.351299 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-07-12 19:35:23.351310 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-07-12 19:35:23.351320 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-07-12 19:35:23.351333 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-07-12 19:35:23.351349 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-07-12 19:35:23.351367 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-07-12 19:35:23.351385 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-07-12 19:35:23.351403 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-07-12 19:35:23.351421 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-07-12 19:35:23.351440 | orchestrator | changed: 
[testbed-node-4] => (item=sudo) 2025-07-12 19:35:23.351452 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-07-12 19:35:23.351463 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-07-12 19:35:23.351474 | orchestrator | 2025-07-12 19:35:23.351484 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-07-12 19:35:23.351495 | orchestrator | Saturday 12 July 2025 19:35:18 +0000 (0:00:01.166) 0:00:07.295 ********* 2025-07-12 19:35:23.351506 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:35:23.351517 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:35:23.351528 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:35:23.351538 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:35:23.351549 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:35:23.351560 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:35:23.351570 | orchestrator | 2025-07-12 19:35:23.351581 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-12 19:35:23.351593 | orchestrator | Saturday 12 July 2025 19:35:19 +0000 (0:00:01.296) 0:00:08.592 ********* 2025-07-12 19:35:23.351604 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-07-12 19:35:23.351638 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-07-12 19:35:23.351649 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-07-12 19:35:23.351660 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-07-12 19:35:23.351693 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-07-12 19:35:23.351705 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-07-12 19:35:23.351716 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-07-12 19:35:23.351727 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-07-12 19:35:23.351752 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-07-12 19:35:23.351763 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-07-12 19:35:23.351774 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-07-12 19:35:23.351785 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-07-12 19:35:23.351796 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-07-12 19:35:23.351806 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-07-12 19:35:23.351817 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-07-12 19:35:23.351827 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-07-12 19:35:23.351838 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-07-12 19:35:23.351849 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-07-12 19:35:23.351860 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-07-12 19:35:23.351897 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-07-12 19:35:23.351910 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-07-12 19:35:23.351920 | 
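The "Set language variables in .bashrc" task that completes above is lineinfile-style: each export line is appended only if it is not already present, so reruns are no-ops. A minimal sketch of that idempotent append (writing to a temp file here instead of the operator's real ~/.bashrc):

```shell
# Append a line to a file only if that exact line is missing (idempotent).
BASHRC=$(mktemp)
add_line() { grep -qxF "$1" "$2" || echo "$1" >> "$2"; }
for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
    add_line "$line" "$BASHRC"
    add_line "$line" "$BASHRC"   # second call changes nothing
done
```

After the loop the file holds exactly the three locale exports seen in the task output, regardless of how many times it runs.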
orchestrator | 2025-07-12 19:35:23.351931 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-12 19:35:23.351943 | orchestrator | Saturday 12 July 2025 19:35:21 +0000 (0:00:01.347) 0:00:09.939 ********* 2025-07-12 19:35:23.351954 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:35:23.351965 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:35:23.351976 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:35:23.351986 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:35:23.351997 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:35:23.352008 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:35:23.352018 | orchestrator | 2025-07-12 19:35:23.352029 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-12 19:35:23.352040 | orchestrator | Saturday 12 July 2025 19:35:21 +0000 (0:00:00.149) 0:00:10.088 ********* 2025-07-12 19:35:23.352050 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:35:23.352061 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:35:23.352072 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:35:23.352082 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:35:23.352093 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:35:23.352103 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:35:23.352114 | orchestrator | 2025-07-12 19:35:23.352125 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-12 19:35:23.352136 | orchestrator | Saturday 12 July 2025 19:35:21 +0000 (0:00:00.604) 0:00:10.693 ********* 2025-07-12 19:35:23.352147 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:35:23.352157 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:35:23.352168 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:35:23.352178 | orchestrator | skipping: [testbed-node-3] 2025-07-12 
19:35:23.352189 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:35:23.352200 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:35:23.352224 | orchestrator | 2025-07-12 19:35:23.352236 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-12 19:35:23.352247 | orchestrator | Saturday 12 July 2025 19:35:22 +0000 (0:00:00.157) 0:00:10.850 ********* 2025-07-12 19:35:23.352258 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 19:35:23.352268 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 19:35:23.352279 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:35:23.352290 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 19:35:23.352301 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:35:23.352312 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:35:23.352323 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 19:35:23.352334 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:35:23.352370 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-07-12 19:35:23.352381 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:35:23.352392 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-07-12 19:35:23.352402 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:35:23.352413 | orchestrator | 2025-07-12 19:35:23.352424 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-12 19:35:23.352435 | orchestrator | Saturday 12 July 2025 19:35:22 +0000 (0:00:00.746) 0:00:11.596 ********* 2025-07-12 19:35:23.352445 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:35:23.352456 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:35:23.352467 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:35:23.352477 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:35:23.352488 | orchestrator | skipping: [testbed-node-4] 2025-07-12 
19:35:23.352498 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:35:23.352509 | orchestrator | 2025-07-12 19:35:23.352529 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-12 19:35:23.352541 | orchestrator | Saturday 12 July 2025 19:35:23 +0000 (0:00:00.145) 0:00:11.742 ********* 2025-07-12 19:35:23.352551 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:35:23.352562 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:35:23.352573 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:35:23.352583 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:35:23.352594 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:35:23.352604 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:35:23.352615 | orchestrator | 2025-07-12 19:35:23.352626 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-12 19:35:23.352636 | orchestrator | Saturday 12 July 2025 19:35:23 +0000 (0:00:00.145) 0:00:11.887 ********* 2025-07-12 19:35:23.352647 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:35:23.352658 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:35:23.352669 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:35:23.352679 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:35:23.352698 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:35:24.478493 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:35:24.478620 | orchestrator | 2025-07-12 19:35:24.478662 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-12 19:35:24.478677 | orchestrator | Saturday 12 July 2025 19:35:23 +0000 (0:00:00.141) 0:00:12.029 ********* 2025-07-12 19:35:24.478689 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:35:24.478700 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:35:24.478711 | orchestrator | changed: [testbed-node-0] 2025-07-12 
19:35:24.478722 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:35:24.478733 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:35:24.478744 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:35:24.478754 | orchestrator | 2025-07-12 19:35:24.478766 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-12 19:35:24.478777 | orchestrator | Saturday 12 July 2025 19:35:24 +0000 (0:00:00.699) 0:00:12.729 ********* 2025-07-12 19:35:24.478787 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:35:24.478798 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:35:24.478809 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:35:24.478820 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:35:24.478831 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:35:24.478842 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:35:24.478853 | orchestrator | 2025-07-12 19:35:24.478864 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:35:24.478955 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:35:24.478968 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:35:24.479003 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:35:24.479015 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:35:24.479026 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:35:24.479039 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:35:24.479052 | orchestrator | 2025-07-12 19:35:24.479064 | orchestrator | 2025-07-12 19:35:24.479077 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:35:24.479090 | orchestrator | Saturday 12 July 2025 19:35:24 +0000 (0:00:00.217) 0:00:12.946 ********* 2025-07-12 19:35:24.479102 | orchestrator | =============================================================================== 2025-07-12 19:35:24.479114 | orchestrator | Gathering Facts --------------------------------------------------------- 3.60s 2025-07-12 19:35:24.479127 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.35s 2025-07-12 19:35:24.479140 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.30s 2025-07-12 19:35:24.479153 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s 2025-07-12 19:35:24.479164 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.85s 2025-07-12 19:35:24.479177 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.75s 2025-07-12 19:35:24.479189 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.70s 2025-07-12 19:35:24.479201 | orchestrator | Do not require tty for all users ---------------------------------------- 0.69s 2025-07-12 19:35:24.479213 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s 2025-07-12 19:35:24.479224 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s 2025-07-12 19:35:24.479237 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-07-12 19:35:24.479249 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2025-07-12 19:35:24.479262 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-07-12 19:35:24.479274 | orchestrator 
| osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2025-07-12 19:35:24.479286 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2025-07-12 19:35:24.479298 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s 2025-07-12 19:35:24.479310 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s 2025-07-12 19:35:24.479323 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.13s 2025-07-12 19:35:24.726790 | orchestrator | + osism apply --environment custom facts 2025-07-12 19:35:26.466304 | orchestrator | 2025-07-12 19:35:26 | INFO  | Trying to run play facts in environment custom 2025-07-12 19:35:36.541323 | orchestrator | 2025-07-12 19:35:36 | INFO  | Task f6fcd1ea-bb15-4276-83ff-d0ea6e855898 (facts) was prepared for execution. 2025-07-12 19:35:36.541425 | orchestrator | 2025-07-12 19:35:36 | INFO  | It takes a moment until task f6fcd1ea-bb15-4276-83ff-d0ea6e855898 (facts) has been started and output is visible here. 
2025-07-12 19:36:18.877307 | orchestrator |
2025-07-12 19:36:18.877421 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-07-12 19:36:18.877439 | orchestrator |
2025-07-12 19:36:18.877451 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 19:36:18.877463 | orchestrator | Saturday 12 July 2025 19:35:40 +0000 (0:00:00.090) 0:00:00.090 *********
2025-07-12 19:36:18.877499 | orchestrator | ok: [testbed-manager]
2025-07-12 19:36:18.877512 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:36:18.877524 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:36:18.877534 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:36:18.877545 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:36:18.877556 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:36:18.877566 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:36:18.877577 | orchestrator |
2025-07-12 19:36:18.877588 | orchestrator | TASK [Copy fact file] **********************************************************
2025-07-12 19:36:18.877599 | orchestrator | Saturday 12 July 2025 19:35:42 +0000 (0:00:01.438) 0:00:01.529 *********
2025-07-12 19:36:18.877610 | orchestrator | ok: [testbed-manager]
2025-07-12 19:36:18.877620 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:36:18.877631 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:36:18.877642 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:36:18.877653 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:36:18.877663 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:36:18.877674 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:36:18.877685 | orchestrator |
2025-07-12 19:36:18.877695 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-07-12 19:36:18.877706 | orchestrator |
2025-07-12 19:36:18.877717 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-12 19:36:18.877728 | orchestrator | Saturday 12 July 2025 19:35:43 +0000 (0:00:01.209) 0:00:02.738 *********
2025-07-12 19:36:18.877738 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:18.877749 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:18.877760 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:18.877771 | orchestrator |
2025-07-12 19:36:18.877781 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-12 19:36:18.877793 | orchestrator | Saturday 12 July 2025 19:35:43 +0000 (0:00:00.113) 0:00:02.852 *********
2025-07-12 19:36:18.877803 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:18.877814 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:18.877825 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:18.877836 | orchestrator |
2025-07-12 19:36:18.877875 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-12 19:36:18.877888 | orchestrator | Saturday 12 July 2025 19:35:43 +0000 (0:00:00.253) 0:00:03.106 *********
2025-07-12 19:36:18.877901 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:18.877913 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:18.877926 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:18.877938 | orchestrator |
2025-07-12 19:36:18.877950 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-12 19:36:18.877964 | orchestrator | Saturday 12 July 2025 19:35:43 +0000 (0:00:00.201) 0:00:03.307 *********
2025-07-12 19:36:18.877978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:36:18.877991 | orchestrator |
2025-07-12 19:36:18.878004 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-12 19:36:18.878068 | orchestrator | Saturday 12 July 2025 19:35:44 +0000 (0:00:00.148) 0:00:03.456 *********
2025-07-12 19:36:18.878082 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:18.878095 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:18.878107 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:18.878119 | orchestrator |
2025-07-12 19:36:18.878132 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-12 19:36:18.878144 | orchestrator | Saturday 12 July 2025 19:35:44 +0000 (0:00:00.444) 0:00:03.900 *********
2025-07-12 19:36:18.878155 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:36:18.878165 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:36:18.878176 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:36:18.878187 | orchestrator |
2025-07-12 19:36:18.878198 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-12 19:36:18.878217 | orchestrator | Saturday 12 July 2025 19:35:44 +0000 (0:00:00.109) 0:00:04.010 *********
2025-07-12 19:36:18.878228 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:36:18.878239 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:36:18.878250 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:36:18.878261 | orchestrator |
2025-07-12 19:36:18.878272 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-12 19:36:18.878283 | orchestrator | Saturday 12 July 2025 19:35:45 +0000 (0:00:01.076) 0:00:05.087 *********
2025-07-12 19:36:18.878293 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:18.878304 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:18.878315 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:18.878326 | orchestrator |
2025-07-12 19:36:18.878337 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-12 19:36:18.878348 | orchestrator | Saturday 12 July 2025 19:35:46 +0000 (0:00:00.469) 0:00:05.557 *********
2025-07-12 19:36:18.878358 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:36:18.878369 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:36:18.878380 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:36:18.878391 | orchestrator |
2025-07-12 19:36:18.878417 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-12 19:36:18.878429 | orchestrator | Saturday 12 July 2025 19:35:47 +0000 (0:00:01.067) 0:00:06.624 *********
2025-07-12 19:36:18.878440 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:36:18.878451 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:36:18.878462 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:36:18.878473 | orchestrator |
2025-07-12 19:36:18.878484 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-07-12 19:36:18.878495 | orchestrator | Saturday 12 July 2025 19:36:02 +0000 (0:00:15.614) 0:00:22.239 *********
2025-07-12 19:36:18.878505 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:36:18.878516 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:36:18.878527 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:36:18.878538 | orchestrator |
2025-07-12 19:36:18.878549 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-07-12 19:36:18.878576 | orchestrator | Saturday 12 July 2025 19:36:02 +0000 (0:00:00.111) 0:00:22.350 *********
2025-07-12 19:36:18.878588 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:36:18.878599 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:36:18.878615 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:36:18.878625 | orchestrator |
2025-07-12 19:36:18.878636 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 19:36:18.878647 | orchestrator | Saturday 12 July 2025 19:36:09 +0000 (0:00:07.035) 0:00:29.385 *********
2025-07-12 19:36:18.878658 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:18.878669 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:18.878680 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:18.878690 | orchestrator |
2025-07-12 19:36:18.878701 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-12 19:36:18.878712 | orchestrator | Saturday 12 July 2025 19:36:10 +0000 (0:00:00.438) 0:00:29.824 *********
2025-07-12 19:36:18.878723 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-07-12 19:36:18.878734 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-07-12 19:36:18.878745 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-07-12 19:36:18.878755 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-07-12 19:36:18.878766 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-07-12 19:36:18.878777 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-07-12 19:36:18.878787 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-07-12 19:36:18.878798 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-07-12 19:36:18.878808 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-07-12 19:36:18.878826 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-07-12 19:36:18.878836 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-07-12 19:36:18.878867 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-07-12 19:36:18.878878 | orchestrator |
2025-07-12 19:36:18.878889 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-12 19:36:18.878900 | orchestrator | Saturday 12 July 2025 19:36:13 +0000 (0:00:03.441) 0:00:33.266 *********
2025-07-12 19:36:18.878910 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:18.878921 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:18.878932 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:18.878943 | orchestrator |
2025-07-12 19:36:18.878954 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 19:36:18.878964 | orchestrator |
2025-07-12 19:36:18.878975 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 19:36:18.878986 | orchestrator | Saturday 12 July 2025 19:36:14 +0000 (0:00:01.167) 0:00:34.433 *********
2025-07-12 19:36:18.878997 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:36:18.879008 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:36:18.879019 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:36:18.879030 | orchestrator | ok: [testbed-manager]
2025-07-12 19:36:18.879041 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:18.879051 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:18.879062 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:18.879073 | orchestrator |
2025-07-12 19:36:18.879084 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:36:18.879096 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:36:18.879107 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:36:18.879119 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:36:18.879130 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:36:18.879141 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 19:36:18.879152 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 19:36:18.879163 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 19:36:18.879174 | orchestrator |
2025-07-12 19:36:18.879185 | orchestrator |
2025-07-12 19:36:18.879196 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:36:18.879206 | orchestrator | Saturday 12 July 2025 19:36:18 +0000 (0:00:03.868) 0:00:38.301 *********
2025-07-12 19:36:18.879218 | orchestrator | ===============================================================================
2025-07-12 19:36:18.879228 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.61s
2025-07-12 19:36:18.879239 | orchestrator | Install required packages (Debian) -------------------------------------- 7.04s
2025-07-12 19:36:18.879250 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.87s
2025-07-12 19:36:18.879261 | orchestrator | Copy fact files --------------------------------------------------------- 3.44s
2025-07-12 19:36:18.879272 | orchestrator | Create custom facts directory ------------------------------------------- 1.44s
2025-07-12 19:36:18.879282 | orchestrator | Copy fact file ---------------------------------------------------------- 1.21s
2025-07-12 19:36:18.879299 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.17s
2025-07-12 19:36:19.064227 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s
2025-07-12 19:36:19.064353 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2025-07-12 19:36:19.064369 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-07-12 19:36:19.064381 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2025-07-12 19:36:19.064392 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2025-07-12 19:36:19.064403 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.25s
2025-07-12 19:36:19.064414 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s
2025-07-12 19:36:19.064425 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-07-12 19:36:19.064436 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-07-12 19:36:19.064448 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-07-12 19:36:19.064459 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-07-12 19:36:19.313431 | orchestrator | + osism apply bootstrap
2025-07-12 19:36:31.255419 | orchestrator | 2025-07-12 19:36:31 | INFO  | Task b9b7d090-5207-479f-b409-4020043fb26a (bootstrap) was prepared for execution.
2025-07-12 19:36:31.255534 | orchestrator | 2025-07-12 19:36:31 | INFO  | It takes a moment until task b9b7d090-5207-479f-b409-4020043fb26a (bootstrap) has been started and output is visible here.
2025-07-12 19:36:46.080252 | orchestrator |
2025-07-12 19:36:46.080364 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-07-12 19:36:46.080383 | orchestrator |
2025-07-12 19:36:46.080395 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-07-12 19:36:46.080407 | orchestrator | Saturday 12 July 2025 19:36:35 +0000 (0:00:00.122) 0:00:00.122 *********
2025-07-12 19:36:46.080419 | orchestrator | ok: [testbed-manager]
2025-07-12 19:36:46.080431 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:36:46.080443 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:36:46.080453 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:36:46.080464 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:46.080475 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:46.080485 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:46.080496 | orchestrator |
2025-07-12 19:36:46.080507 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 19:36:46.080518 | orchestrator |
2025-07-12 19:36:46.080529 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 19:36:46.080540 | orchestrator | Saturday 12 July 2025 19:36:35 +0000 (0:00:00.185) 0:00:00.308 *********
2025-07-12 19:36:46.080551 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:36:46.080562 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:36:46.080573 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:36:46.080584 | orchestrator | ok: [testbed-manager]
2025-07-12 19:36:46.080594 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:46.080605 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:46.080616 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:46.080626 | orchestrator |
2025-07-12 19:36:46.080637 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-07-12 19:36:46.080648 | orchestrator |
2025-07-12 19:36:46.080659 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 19:36:46.080670 | orchestrator | Saturday 12 July 2025 19:36:38 +0000 (0:00:03.546) 0:00:03.854 *********
2025-07-12 19:36:46.080681 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-07-12 19:36:46.080693 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-12 19:36:46.080704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-07-12 19:36:46.080715 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-07-12 19:36:46.080743 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 19:36:46.080755 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-07-12 19:36:46.080766 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-07-12 19:36:46.080777 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 19:36:46.080788 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-07-12 19:36:46.080798 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 19:36:46.080809 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-07-12 19:36:46.080820 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-12 19:36:46.080863 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-07-12 19:36:46.080875 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-07-12 19:36:46.080886 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-07-12 19:36:46.080896 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-12 19:36:46.080907 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-07-12 19:36:46.080918 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-07-12 19:36:46.080929 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-07-12 19:36:46.080940 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-07-12 19:36:46.080951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-07-12 19:36:46.080962 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-07-12 19:36:46.080973 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-12 19:36:46.080983 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:36:46.080994 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:36:46.081005 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-07-12 19:36:46.081016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 19:36:46.081026 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-07-12 19:36:46.081037 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-07-12 19:36:46.081048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 19:36:46.081059 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 19:36:46.081070 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-07-12 19:36:46.081092 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-07-12 19:36:46.081104 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-07-12 19:36:46.081115 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 19:36:46.081126 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:36:46.081136 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 19:36:46.081147 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-07-12 19:36:46.081158 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 19:36:46.081169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 19:36:46.081179 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 19:36:46.081190 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-07-12 19:36:46.081201 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:36:46.081212 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-07-12 19:36:46.081222 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 19:36:46.081233 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 19:36:46.081259 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-07-12 19:36:46.081271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 19:36:46.081281 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-07-12 19:36:46.081300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-07-12 19:36:46.081311 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:36:46.081322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 19:36:46.081333 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-07-12 19:36:46.081344 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:36:46.081355 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-07-12 19:36:46.081366 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:36:46.081376 | orchestrator |
2025-07-12 19:36:46.081387 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-07-12 19:36:46.081398 | orchestrator |
2025-07-12 19:36:46.081409 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-07-12 19:36:46.081420 | orchestrator | Saturday 12 July 2025 19:36:39 +0000 (0:00:00.415) 0:00:04.270 *********
2025-07-12 19:36:46.081431 | orchestrator | ok: [testbed-manager]
2025-07-12 19:36:46.081441 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:46.081452 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:46.081463 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:46.081474 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:36:46.081485 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:36:46.081495 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:36:46.081506 | orchestrator |
2025-07-12 19:36:46.081517 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-07-12 19:36:46.081528 | orchestrator | Saturday 12 July 2025 19:36:40 +0000 (0:00:01.216) 0:00:05.486 *********
2025-07-12 19:36:46.081539 | orchestrator | ok: [testbed-manager]
2025-07-12 19:36:46.081550 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:36:46.081561 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:36:46.081571 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:36:46.081582 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:36:46.081593 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:36:46.081603 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:36:46.081614 | orchestrator |
2025-07-12 19:36:46.081625 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-07-12 19:36:46.081636 | orchestrator | Saturday 12 July 2025 19:36:41 +0000 (0:00:01.148) 0:00:06.634 *********
2025-07-12 19:36:46.081648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:36:46.081662 | orchestrator |
2025-07-12 19:36:46.081673 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-07-12 19:36:46.081684 | orchestrator | Saturday 12 July 2025 19:36:41 +0000 (0:00:00.237) 0:00:06.872 *********
2025-07-12 19:36:46.081695 | orchestrator | changed: [testbed-manager]
2025-07-12 19:36:46.081706 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:36:46.081717 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:36:46.081727 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:36:46.081738 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:36:46.081749 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:36:46.081760 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:36:46.081770 | orchestrator |
2025-07-12 19:36:46.081781 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-07-12 19:36:46.081792 | orchestrator | Saturday 12 July 2025 19:36:43 +0000 (0:00:01.921) 0:00:08.793 *********
2025-07-12 19:36:46.081803 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:36:46.081815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:36:46.081828 | orchestrator |
2025-07-12 19:36:46.081857 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-07-12 19:36:46.081868 | orchestrator | Saturday 12 July 2025 19:36:43 +0000 (0:00:00.254) 0:00:09.048 *********
2025-07-12 19:36:46.081884 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:36:46.081895 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:36:46.081906 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:36:46.081917 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:36:46.081932 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:36:46.081943 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:36:46.081955 | orchestrator |
2025-07-12 19:36:46.081966 | orchestrator | TASK
[osism.commons.proxy : Set system wide settings in environment file] ****** 2025-07-12 19:36:46.081977 | orchestrator | Saturday 12 July 2025 19:36:45 +0000 (0:00:01.076) 0:00:10.125 ********* 2025-07-12 19:36:46.081988 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:36:46.081998 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:36:46.082009 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:36:46.082091 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:36:46.082104 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:36:46.082114 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:36:46.082125 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:36:46.082136 | orchestrator | 2025-07-12 19:36:46.082147 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-07-12 19:36:46.082158 | orchestrator | Saturday 12 July 2025 19:36:45 +0000 (0:00:00.533) 0:00:10.659 ********* 2025-07-12 19:36:46.082169 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:36:46.082179 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:36:46.082190 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:36:46.082201 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:36:46.082211 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:36:46.082222 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:36:46.082232 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:46.082243 | orchestrator | 2025-07-12 19:36:46.082254 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-12 19:36:46.082266 | orchestrator | Saturday 12 July 2025 19:36:45 +0000 (0:00:00.412) 0:00:11.071 ********* 2025-07-12 19:36:46.082276 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:36:46.082287 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:36:46.082307 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:36:58.618120 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 19:36:58.618242 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:36:58.618259 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:36:58.618271 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:36:58.618283 | orchestrator | 2025-07-12 19:36:58.618296 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-12 19:36:58.618309 | orchestrator | Saturday 12 July 2025 19:36:46 +0000 (0:00:00.191) 0:00:11.263 ********* 2025-07-12 19:36:58.618322 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:36:58.618352 | orchestrator | 2025-07-12 19:36:58.618364 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-12 19:36:58.618376 | orchestrator | Saturday 12 July 2025 19:36:46 +0000 (0:00:00.279) 0:00:11.543 ********* 2025-07-12 19:36:58.618387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:36:58.618399 | orchestrator | 2025-07-12 19:36:58.618410 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-07-12 19:36:58.618421 | orchestrator | Saturday 12 July 2025 19:36:46 +0000 (0:00:00.283) 0:00:11.826 ********* 2025-07-12 19:36:58.618432 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.618444 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.618454 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:36:58.618490 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:36:58.618502 | orchestrator | ok: 
[testbed-node-0] 2025-07-12 19:36:58.618513 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.618524 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:36:58.618534 | orchestrator | 2025-07-12 19:36:58.618545 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-12 19:36:58.618556 | orchestrator | Saturday 12 July 2025 19:36:48 +0000 (0:00:01.431) 0:00:13.257 ********* 2025-07-12 19:36:58.618567 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:36:58.618578 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:36:58.618589 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:36:58.618600 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:36:58.618612 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:36:58.618625 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:36:58.618637 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:36:58.618649 | orchestrator | 2025-07-12 19:36:58.618661 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-12 19:36:58.618673 | orchestrator | Saturday 12 July 2025 19:36:48 +0000 (0:00:00.197) 0:00:13.455 ********* 2025-07-12 19:36:58.618685 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:36:58.618697 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:36:58.618709 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:36:58.618721 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.618734 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:36:58.618746 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.618757 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.618770 | orchestrator | 2025-07-12 19:36:58.618782 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-12 19:36:58.618794 | orchestrator | Saturday 12 July 2025 19:36:49 +0000 (0:00:01.370) 0:00:14.825 ********* 2025-07-12 19:36:58.618806 | 
orchestrator | skipping: [testbed-manager] 2025-07-12 19:36:58.618818 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:36:58.618854 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:36:58.618866 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:36:58.618878 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:36:58.618890 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:36:58.618902 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:36:58.618914 | orchestrator | 2025-07-12 19:36:58.618927 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-12 19:36:58.618941 | orchestrator | Saturday 12 July 2025 19:36:49 +0000 (0:00:00.193) 0:00:15.019 ********* 2025-07-12 19:36:58.618954 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.618966 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:36:58.618977 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:36:58.618988 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:36:58.618999 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:36:58.619010 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:36:58.619020 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:36:58.619031 | orchestrator | 2025-07-12 19:36:58.619042 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-12 19:36:58.619053 | orchestrator | Saturday 12 July 2025 19:36:50 +0000 (0:00:00.570) 0:00:15.590 ********* 2025-07-12 19:36:58.619064 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.619075 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:36:58.619085 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:36:58.619096 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:36:58.619107 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:36:58.619118 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:36:58.619129 | 
orchestrator | changed: [testbed-node-1] 2025-07-12 19:36:58.619139 | orchestrator | 2025-07-12 19:36:58.619150 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-12 19:36:58.619161 | orchestrator | Saturday 12 July 2025 19:36:51 +0000 (0:00:01.077) 0:00:16.667 ********* 2025-07-12 19:36:58.619172 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.619191 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.619202 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.619213 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:36:58.619224 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:36:58.619235 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:36:58.619245 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:36:58.619256 | orchestrator | 2025-07-12 19:36:58.619267 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-12 19:36:58.619278 | orchestrator | Saturday 12 July 2025 19:36:52 +0000 (0:00:01.037) 0:00:17.705 ********* 2025-07-12 19:36:58.619307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:36:58.619318 | orchestrator | 2025-07-12 19:36:58.619330 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-12 19:36:58.619341 | orchestrator | Saturday 12 July 2025 19:36:52 +0000 (0:00:00.368) 0:00:18.074 ********* 2025-07-12 19:36:58.619351 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:36:58.619362 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:36:58.619373 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:36:58.619384 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:36:58.619394 | orchestrator | changed: [testbed-node-4] 
2025-07-12 19:36:58.619405 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:36:58.619415 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:36:58.619426 | orchestrator | 2025-07-12 19:36:58.619437 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-12 19:36:58.619448 | orchestrator | Saturday 12 July 2025 19:36:54 +0000 (0:00:01.290) 0:00:19.364 ********* 2025-07-12 19:36:58.619459 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.619470 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:36:58.619480 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:36:58.619491 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:36:58.619502 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.619513 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.619523 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:36:58.619534 | orchestrator | 2025-07-12 19:36:58.619544 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-12 19:36:58.619555 | orchestrator | Saturday 12 July 2025 19:36:54 +0000 (0:00:00.200) 0:00:19.565 ********* 2025-07-12 19:36:58.619566 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.619577 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:36:58.619587 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:36:58.619598 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:36:58.619650 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.619662 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.619673 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:36:58.619684 | orchestrator | 2025-07-12 19:36:58.619695 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-12 19:36:58.619706 | orchestrator | Saturday 12 July 2025 19:36:54 +0000 (0:00:00.214) 0:00:19.779 ********* 2025-07-12 19:36:58.619716 | orchestrator | ok: [testbed-manager] 2025-07-12 
19:36:58.619727 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:36:58.619738 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:36:58.619748 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:36:58.619759 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.619770 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.619781 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:36:58.619791 | orchestrator | 2025-07-12 19:36:58.619802 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-12 19:36:58.619814 | orchestrator | Saturday 12 July 2025 19:36:54 +0000 (0:00:00.246) 0:00:20.026 ********* 2025-07-12 19:36:58.619854 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:36:58.619876 | orchestrator | 2025-07-12 19:36:58.619887 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-12 19:36:58.619898 | orchestrator | Saturday 12 July 2025 19:36:55 +0000 (0:00:00.262) 0:00:20.288 ********* 2025-07-12 19:36:58.619909 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.619920 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:36:58.619931 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:36:58.619942 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:36:58.619952 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.619963 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.619974 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:36:58.619985 | orchestrator | 2025-07-12 19:36:58.619996 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-12 19:36:58.620007 | orchestrator | Saturday 12 July 2025 19:36:55 +0000 (0:00:00.550) 0:00:20.839 ********* 2025-07-12 19:36:58.620017 | 
orchestrator | skipping: [testbed-manager] 2025-07-12 19:36:58.620028 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:36:58.620039 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:36:58.620050 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:36:58.620066 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:36:58.620077 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:36:58.620088 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:36:58.620099 | orchestrator | 2025-07-12 19:36:58.620109 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-12 19:36:58.620121 | orchestrator | Saturday 12 July 2025 19:36:55 +0000 (0:00:00.199) 0:00:21.038 ********* 2025-07-12 19:36:58.620131 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.620142 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:36:58.620153 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:36:58.620164 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.620175 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:36:58.620186 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:36:58.620196 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.620207 | orchestrator | 2025-07-12 19:36:58.620218 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-12 19:36:58.620229 | orchestrator | Saturday 12 July 2025 19:36:56 +0000 (0:00:01.029) 0:00:22.068 ********* 2025-07-12 19:36:58.620240 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.620251 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:36:58.620262 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:36:58.620273 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:36:58.620283 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.620294 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.620305 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:36:58.620316 | 
orchestrator | 2025-07-12 19:36:58.620327 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-12 19:36:58.620338 | orchestrator | Saturday 12 July 2025 19:36:57 +0000 (0:00:00.576) 0:00:22.644 ********* 2025-07-12 19:36:58.620349 | orchestrator | ok: [testbed-manager] 2025-07-12 19:36:58.620360 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:36:58.620371 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:36:58.620382 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:36:58.620400 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:37:36.742280 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.743185 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:37:36.743214 | orchestrator | 2025-07-12 19:37:36.743226 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-12 19:37:36.743237 | orchestrator | Saturday 12 July 2025 19:36:58 +0000 (0:00:01.075) 0:00:23.720 ********* 2025-07-12 19:37:36.743247 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.743256 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.743265 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.743273 | orchestrator | changed: [testbed-manager] 2025-07-12 19:37:36.743283 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:37:36.743316 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:37:36.743325 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:37:36.743334 | orchestrator | 2025-07-12 19:37:36.743343 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-07-12 19:37:36.743352 | orchestrator | Saturday 12 July 2025 19:37:14 +0000 (0:00:15.945) 0:00:39.665 ********* 2025-07-12 19:37:36.743361 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.743370 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.743379 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.743387 
| orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.743396 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.743404 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.743413 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.743422 | orchestrator | 2025-07-12 19:37:36.743430 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-07-12 19:37:36.743439 | orchestrator | Saturday 12 July 2025 19:37:14 +0000 (0:00:00.239) 0:00:39.905 ********* 2025-07-12 19:37:36.743448 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.743457 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.743465 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.743474 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.743482 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.743491 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.743500 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.743508 | orchestrator | 2025-07-12 19:37:36.743517 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-07-12 19:37:36.743526 | orchestrator | Saturday 12 July 2025 19:37:14 +0000 (0:00:00.209) 0:00:40.115 ********* 2025-07-12 19:37:36.743534 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.743543 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.743551 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.743560 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.743568 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.743577 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.743585 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.743594 | orchestrator | 2025-07-12 19:37:36.743603 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-07-12 19:37:36.743612 | orchestrator | Saturday 12 July 2025 19:37:15 +0000 (0:00:00.232) 0:00:40.347 
********* 2025-07-12 19:37:36.743623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:37:36.743635 | orchestrator | 2025-07-12 19:37:36.743644 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-07-12 19:37:36.743653 | orchestrator | Saturday 12 July 2025 19:37:15 +0000 (0:00:00.254) 0:00:40.602 ********* 2025-07-12 19:37:36.743661 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.743670 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.743678 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.743687 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.743695 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.743704 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.743713 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.743721 | orchestrator | 2025-07-12 19:37:36.743730 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-07-12 19:37:36.743739 | orchestrator | Saturday 12 July 2025 19:37:17 +0000 (0:00:01.684) 0:00:42.287 ********* 2025-07-12 19:37:36.743748 | orchestrator | changed: [testbed-manager] 2025-07-12 19:37:36.743756 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:37:36.743765 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:37:36.743774 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:37:36.743782 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:37:36.743791 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:37:36.743799 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:37:36.743815 | orchestrator | 2025-07-12 19:37:36.743860 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-07-12 
19:37:36.743869 | orchestrator | Saturday 12 July 2025 19:37:18 +0000 (0:00:01.048) 0:00:43.335 ********* 2025-07-12 19:37:36.743878 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.743887 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.743895 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.743904 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.743913 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.743921 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.743930 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.743939 | orchestrator | 2025-07-12 19:37:36.743948 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-07-12 19:37:36.743957 | orchestrator | Saturday 12 July 2025 19:37:18 +0000 (0:00:00.768) 0:00:44.103 ********* 2025-07-12 19:37:36.743966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:37:36.743976 | orchestrator | 2025-07-12 19:37:36.743985 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-07-12 19:37:36.743994 | orchestrator | Saturday 12 July 2025 19:37:19 +0000 (0:00:00.253) 0:00:44.357 ********* 2025-07-12 19:37:36.744003 | orchestrator | changed: [testbed-manager] 2025-07-12 19:37:36.744012 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:37:36.744020 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:37:36.744029 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:37:36.744038 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:37:36.744046 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:37:36.744055 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:37:36.744064 | orchestrator | 2025-07-12 19:37:36.744090 | orchestrator | TASK 
[osism.services.rsyslog : Include additional log server tasks] ************ 2025-07-12 19:37:36.744099 | orchestrator | Saturday 12 July 2025 19:37:20 +0000 (0:00:01.001) 0:00:45.358 ********* 2025-07-12 19:37:36.744108 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:37:36.744117 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:37:36.744125 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:37:36.744134 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:37:36.744143 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:37:36.744151 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:37:36.744160 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:37:36.744168 | orchestrator | 2025-07-12 19:37:36.744177 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-07-12 19:37:36.744186 | orchestrator | Saturday 12 July 2025 19:37:20 +0000 (0:00:00.229) 0:00:45.587 ********* 2025-07-12 19:37:36.744195 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:37:36.744203 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:37:36.744212 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:37:36.744221 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:37:36.744229 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:37:36.744238 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:37:36.744247 | orchestrator | changed: [testbed-manager] 2025-07-12 19:37:36.744255 | orchestrator | 2025-07-12 19:37:36.744264 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-07-12 19:37:36.744273 | orchestrator | Saturday 12 July 2025 19:37:31 +0000 (0:00:11.241) 0:00:56.829 ********* 2025-07-12 19:37:36.744281 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.744291 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.744299 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.744346 | orchestrator | ok: 
[testbed-node-2] 2025-07-12 19:37:36.744358 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.744367 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.744375 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.744384 | orchestrator | 2025-07-12 19:37:36.744393 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-07-12 19:37:36.744410 | orchestrator | Saturday 12 July 2025 19:37:32 +0000 (0:00:00.938) 0:00:57.768 ********* 2025-07-12 19:37:36.744418 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.744427 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.744436 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.744444 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.744453 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.744461 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.744470 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.744478 | orchestrator | 2025-07-12 19:37:36.744487 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-07-12 19:37:36.744496 | orchestrator | Saturday 12 July 2025 19:37:33 +0000 (0:00:00.908) 0:00:58.676 ********* 2025-07-12 19:37:36.744505 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.744513 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.744522 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.744530 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.744539 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.744547 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.744556 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.744565 | orchestrator | 2025-07-12 19:37:36.744574 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-07-12 19:37:36.744583 | orchestrator | Saturday 12 July 2025 19:37:33 +0000 (0:00:00.257) 0:00:58.934 ********* 
2025-07-12 19:37:36.744619 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.744628 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.744637 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.744646 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.744654 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.744663 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.744671 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.744680 | orchestrator | 2025-07-12 19:37:36.744688 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-07-12 19:37:36.744697 | orchestrator | Saturday 12 July 2025 19:37:34 +0000 (0:00:00.250) 0:00:59.184 ********* 2025-07-12 19:37:36.744706 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:37:36.744715 | orchestrator | 2025-07-12 19:37:36.744725 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-07-12 19:37:36.744734 | orchestrator | Saturday 12 July 2025 19:37:34 +0000 (0:00:00.308) 0:00:59.492 ********* 2025-07-12 19:37:36.744743 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.744752 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.744761 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.744769 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.744778 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.744787 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.744795 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.744804 | orchestrator | 2025-07-12 19:37:36.744813 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-07-12 19:37:36.744836 | orchestrator | Saturday 12 July 
2025 19:37:35 +0000 (0:00:01.549) 0:01:01.042 ********* 2025-07-12 19:37:36.744845 | orchestrator | changed: [testbed-manager] 2025-07-12 19:37:36.744854 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:37:36.744863 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:37:36.744872 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:37:36.744881 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:37:36.744889 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:37:36.744898 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:37:36.744907 | orchestrator | 2025-07-12 19:37:36.744915 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-07-12 19:37:36.744924 | orchestrator | Saturday 12 July 2025 19:37:36 +0000 (0:00:00.575) 0:01:01.617 ********* 2025-07-12 19:37:36.744939 | orchestrator | ok: [testbed-manager] 2025-07-12 19:37:36.744948 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:37:36.744957 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:37:36.744966 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:37:36.744974 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:37:36.744983 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:37:36.744992 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:37:36.745000 | orchestrator | 2025-07-12 19:37:36.745016 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-07-12 19:39:50.378101 | orchestrator | Saturday 12 July 2025 19:37:36 +0000 (0:00:00.228) 0:01:01.846 ********* 2025-07-12 19:39:50.378240 | orchestrator | ok: [testbed-manager] 2025-07-12 19:39:50.378273 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:39:50.378295 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:39:50.378316 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:39:50.378335 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:39:50.378356 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:39:50.378376 
| orchestrator | ok: [testbed-node-4] 2025-07-12 19:39:50.378397 | orchestrator | 2025-07-12 19:39:50.378418 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-07-12 19:39:50.378440 | orchestrator | Saturday 12 July 2025 19:37:37 +0000 (0:00:01.091) 0:01:02.938 ********* 2025-07-12 19:39:50.378460 | orchestrator | changed: [testbed-manager] 2025-07-12 19:39:50.378482 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:39:50.378503 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:39:50.378523 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:39:50.378544 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:39:50.378585 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:39:50.378607 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:39:50.378629 | orchestrator | 2025-07-12 19:39:50.378649 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-07-12 19:39:50.378671 | orchestrator | Saturday 12 July 2025 19:37:39 +0000 (0:00:01.642) 0:01:04.580 ********* 2025-07-12 19:39:50.378692 | orchestrator | ok: [testbed-manager] 2025-07-12 19:39:50.378711 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:39:50.378732 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:39:50.378752 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:39:50.378773 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:39:50.378857 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:39:50.378883 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:39:50.378910 | orchestrator | 2025-07-12 19:39:50.378931 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-07-12 19:39:50.378951 | orchestrator | Saturday 12 July 2025 19:37:42 +0000 (0:00:02.588) 0:01:07.169 ********* 2025-07-12 19:39:50.378971 | orchestrator | ok: [testbed-manager] 2025-07-12 19:39:50.378990 | orchestrator | ok: [testbed-node-4] 2025-07-12 
19:39:50.379009 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:39:50.379028 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:39:50.379047 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:39:50.379066 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:39:50.379085 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:39:50.379104 | orchestrator | 2025-07-12 19:39:50.379123 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-07-12 19:39:50.379142 | orchestrator | Saturday 12 July 2025 19:38:18 +0000 (0:00:36.844) 0:01:44.013 ********* 2025-07-12 19:39:50.379162 | orchestrator | changed: [testbed-manager] 2025-07-12 19:39:50.379182 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:39:50.379203 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:39:50.379224 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:39:50.379244 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:39:50.379265 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:39:50.379286 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:39:50.379306 | orchestrator | 2025-07-12 19:39:50.379327 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-07-12 19:39:50.379378 | orchestrator | Saturday 12 July 2025 19:39:34 +0000 (0:01:15.693) 0:02:59.707 ********* 2025-07-12 19:39:50.379400 | orchestrator | ok: [testbed-manager] 2025-07-12 19:39:50.379420 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:39:50.379439 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:39:50.379456 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:39:50.379475 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:39:50.379494 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:39:50.379513 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:39:50.379531 | orchestrator | 2025-07-12 19:39:50.379551 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer 
required] *** 2025-07-12 19:39:50.379571 | orchestrator | Saturday 12 July 2025 19:39:36 +0000 (0:00:01.639) 0:03:01.346 ********* 2025-07-12 19:39:50.379591 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:39:50.379611 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:39:50.379631 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:39:50.379651 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:39:50.379670 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:39:50.379687 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:39:50.379708 | orchestrator | changed: [testbed-manager] 2025-07-12 19:39:50.379729 | orchestrator | 2025-07-12 19:39:50.379749 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-07-12 19:39:50.379768 | orchestrator | Saturday 12 July 2025 19:39:48 +0000 (0:00:11.956) 0:03:13.303 ********* 2025-07-12 19:39:50.379830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-07-12 19:39:50.379859 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, 
{'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-07-12 19:39:50.379917 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-07-12 19:39:50.379947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-07-12 19:39:50.379969 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-07-12 19:39:50.379989 | orchestrator | 2025-07-12 19:39:50.380010 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-07-12 19:39:50.380030 | orchestrator | Saturday 12 July 2025 19:39:48 +0000 (0:00:00.376) 0:03:13.680 ********* 2025-07-12 19:39:50.380048 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 19:39:50.380065 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:39:50.380099 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 19:39:50.380119 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 19:39:50.380138 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:39:50.380158 | orchestrator | 
skipping: [testbed-node-4] 2025-07-12 19:39:50.380177 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 19:39:50.380195 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:39:50.380213 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 19:39:50.380233 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 19:39:50.380251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 19:39:50.380268 | orchestrator | 2025-07-12 19:39:50.380287 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-07-12 19:39:50.380307 | orchestrator | Saturday 12 July 2025 19:39:50 +0000 (0:00:01.613) 0:03:15.293 ********* 2025-07-12 19:39:50.380321 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 19:39:50.380337 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 19:39:50.380356 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 19:39:50.380374 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 19:39:50.380394 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 19:39:50.380415 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 19:39:50.380434 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 19:39:50.380453 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 19:39:50.380474 | orchestrator | 
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 19:39:50.380493 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 19:39:50.380513 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:39:50.380533 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 19:39:50.380544 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 19:39:50.380555 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 19:39:50.380566 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 19:39:50.380584 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 19:39:50.380602 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 19:39:50.380620 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 19:39:50.380638 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 19:39:50.380656 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 19:39:50.380672 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 19:39:50.380691 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 19:39:58.188464 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 19:39:58.188593 | orchestrator | skipping: [testbed-node-4] => (item={'name': 
'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 19:39:58.188609 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 19:39:58.188621 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 19:39:58.188633 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:39:58.188645 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 19:39:58.188656 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 19:39:58.188668 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 19:39:58.188678 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 19:39:58.188689 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 19:39:58.188700 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:39:58.188711 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 19:39:58.188722 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 19:39:58.188733 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 19:39:58.188744 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 19:39:58.188755 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 19:39:58.188765 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 19:39:58.188776 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 19:39:58.188821 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 19:39:58.188834 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 19:39:58.188845 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 19:39:58.188856 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:39:58.188867 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-12 19:39:58.188878 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-12 19:39:58.188889 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-12 19:39:58.188900 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-12 19:39:58.188911 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-12 19:39:58.188923 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-12 19:39:58.188934 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-12 19:39:58.188945 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-12 19:39:58.188956 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-12 19:39:58.188967 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-12 19:39:58.188978 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-12 19:39:58.188989 | orchestrator | 
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-12 19:39:58.189010 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-12 19:39:58.189022 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-12 19:39:58.189034 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-12 19:39:58.189047 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-12 19:39:58.189058 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-12 19:39:58.189070 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-12 19:39:58.189083 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-12 19:39:58.189096 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-12 19:39:58.189108 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-12 19:39:58.189139 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-12 19:39:58.189152 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-12 19:39:58.189165 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-12 19:39:58.189177 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-12 19:39:58.189190 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-12 19:39:58.189202 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-12 19:39:58.189214 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-12 19:39:58.189226 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-12 19:39:58.189238 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-12 19:39:58.189250 | orchestrator | 2025-07-12 19:39:58.189264 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-07-12 19:39:58.189276 | orchestrator | Saturday 12 July 2025 19:39:55 +0000 (0:00:05.811) 0:03:21.105 ********* 2025-07-12 19:39:58.189289 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:39:58.189302 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:39:58.189314 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:39:58.189326 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:39:58.189339 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:39:58.189369 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:39:58.189380 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:39:58.189391 | orchestrator | 2025-07-12 19:39:58.189402 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-07-12 19:39:58.189413 | orchestrator | Saturday 12 July 2025 19:39:56 +0000 (0:00:00.653) 0:03:21.758 ********* 2025-07-12 19:39:58.189424 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 19:39:58.189435 | 
orchestrator | skipping: [testbed-manager] 2025-07-12 19:39:58.189446 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 19:39:58.189457 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 19:39:58.189468 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:39:58.189490 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:39:58.189505 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 19:39:58.189516 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:39:58.189527 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-12 19:39:58.189538 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-12 19:39:58.189549 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-12 19:39:58.189560 | orchestrator | 2025-07-12 19:39:58.189571 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-07-12 19:39:58.189582 | orchestrator | Saturday 12 July 2025 19:39:57 +0000 (0:00:00.603) 0:03:22.361 ********* 2025-07-12 19:39:58.189598 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 19:39:58.189610 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:39:58.189621 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 19:39:58.189632 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:39:58.189642 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 19:39:58.189653 | orchestrator | 
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 19:39:58.189664 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:39:58.189675 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:39:58.189686 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-12 19:39:58.189697 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-12 19:39:58.189707 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-12 19:39:58.189718 | orchestrator | 2025-07-12 19:39:58.189729 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-07-12 19:39:58.189825 | orchestrator | Saturday 12 July 2025 19:39:57 +0000 (0:00:00.644) 0:03:23.006 ********* 2025-07-12 19:39:58.189843 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:39:58.189854 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:39:58.189865 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:39:58.189876 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:39:58.189887 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:39:58.189906 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:40:09.885253 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:40:09.885368 | orchestrator | 2025-07-12 19:40:09.885391 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-07-12 19:40:09.885403 | orchestrator | Saturday 12 July 2025 19:39:58 +0000 (0:00:00.292) 0:03:23.298 ********* 2025-07-12 19:40:09.885412 | orchestrator | ok: [testbed-manager] 2025-07-12 19:40:09.885422 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:40:09.885431 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:40:09.885440 | orchestrator | ok: [testbed-node-0] 2025-07-12 
19:40:09.885449 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:40:09.885457 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:40:09.885466 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:40:09.885475 | orchestrator | 2025-07-12 19:40:09.885485 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-07-12 19:40:09.885494 | orchestrator | Saturday 12 July 2025 19:40:03 +0000 (0:00:05.496) 0:03:28.795 ********* 2025-07-12 19:40:09.885503 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-07-12 19:40:09.885512 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:40:09.885521 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-07-12 19:40:09.885530 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-07-12 19:40:09.885563 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:40:09.885572 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-07-12 19:40:09.885581 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:40:09.885589 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-07-12 19:40:09.885598 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:40:09.885607 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-07-12 19:40:09.885616 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:40:09.885624 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:40:09.885633 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-07-12 19:40:09.885642 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:40:09.885651 | orchestrator | 2025-07-12 19:40:09.885660 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-07-12 19:40:09.885668 | orchestrator | Saturday 12 July 2025 19:40:03 +0000 (0:00:00.295) 0:03:29.090 ********* 2025-07-12 19:40:09.885677 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-07-12 19:40:09.885689 | orchestrator | ok: 
[testbed-node-1] => (item=cron) 2025-07-12 19:40:09.885699 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-07-12 19:40:09.885707 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-07-12 19:40:09.885716 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-07-12 19:40:09.885725 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-07-12 19:40:09.885733 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-07-12 19:40:09.885742 | orchestrator | 2025-07-12 19:40:09.885751 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-07-12 19:40:09.885760 | orchestrator | Saturday 12 July 2025 19:40:05 +0000 (0:00:01.111) 0:03:30.202 ********* 2025-07-12 19:40:09.885770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:40:09.885810 | orchestrator | 2025-07-12 19:40:09.885822 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-07-12 19:40:09.885833 | orchestrator | Saturday 12 July 2025 19:40:05 +0000 (0:00:00.464) 0:03:30.666 ********* 2025-07-12 19:40:09.885843 | orchestrator | ok: [testbed-manager] 2025-07-12 19:40:09.885853 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:40:09.885864 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:40:09.885873 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:40:09.885883 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:40:09.885893 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:40:09.885904 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:40:09.885914 | orchestrator | 2025-07-12 19:40:09.885924 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-07-12 19:40:09.885934 | orchestrator | Saturday 12 July 2025 19:40:06 +0000 
(0:00:01.264) 0:03:31.931 *********
2025-07-12 19:40:09.885945 | orchestrator | ok: [testbed-manager]
2025-07-12 19:40:09.885955 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:40:09.885965 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:40:09.885976 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:40:09.885998 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:40:09.886007 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:40:09.886066 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:40:09.886078 | orchestrator |
2025-07-12 19:40:09.886087 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-07-12 19:40:09.886095 | orchestrator | Saturday 12 July 2025 19:40:07 +0000 (0:00:00.627) 0:03:32.559 *********
2025-07-12 19:40:09.886104 | orchestrator | changed: [testbed-manager]
2025-07-12 19:40:09.886113 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:40:09.886122 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:40:09.886131 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:40:09.886139 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:40:09.886148 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:40:09.886157 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:40:09.886174 | orchestrator |
2025-07-12 19:40:09.886182 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-07-12 19:40:09.886191 | orchestrator | Saturday 12 July 2025 19:40:08 +0000 (0:00:00.641) 0:03:33.200 *********
2025-07-12 19:40:09.886200 | orchestrator | ok: [testbed-manager]
2025-07-12 19:40:09.886209 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:40:09.886217 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:40:09.886226 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:40:09.886235 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:40:09.886243 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:40:09.886252 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:40:09.886260 | orchestrator |
2025-07-12 19:40:09.886269 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-07-12 19:40:09.886278 | orchestrator | Saturday 12 July 2025 19:40:08 +0000 (0:00:00.626) 0:03:33.827 *********
2025-07-12 19:40:09.886308 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1752347849.4377747, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:09.886321 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1752347861.3132567, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:09.886331 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1752347861.7289016, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:09.886340 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1752347856.201222, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:09.886349 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1752347860.6160836, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:09.886363 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1752347861.5830157, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:09.886378 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1752347865.4596233, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:09.886403 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:34.660615 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:34.660694 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:34.660704 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:34.660712 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:34.660720 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:34.660741 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 19:40:34.660748 | orchestrator |
2025-07-12 19:40:34.660756 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-07-12 19:40:34.660764 | orchestrator | Saturday 12 July 2025 19:40:09 +0000 (0:00:01.159) 0:03:34.986 *********
2025-07-12 19:40:34.660771 | orchestrator | changed: [testbed-manager]
2025-07-12 19:40:34.660799 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:40:34.660805 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:40:34.660811 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:40:34.660818 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:40:34.660824 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:40:34.660830 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:40:34.660836 | orchestrator |
2025-07-12 19:40:34.660850 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-07-12 19:40:34.660857 | orchestrator | Saturday 12 July 2025 19:40:10 +0000 (0:00:01.112) 0:03:36.099 *********
2025-07-12 19:40:34.660864 | orchestrator | changed: [testbed-manager]
2025-07-12 19:40:34.660870 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:40:34.660876 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:40:34.660882 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:40:34.660900 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:40:34.660907 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:40:34.660913 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:40:34.660919 | orchestrator |
2025-07-12 19:40:34.660925 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-07-12 19:40:34.660932 | orchestrator | Saturday 12 July 2025 19:40:12 +0000 (0:00:01.155) 0:03:37.254 *********
2025-07-12 19:40:34.660938 | orchestrator | changed: [testbed-manager]
2025-07-12 19:40:34.660944 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:40:34.660950 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:40:34.660956 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:40:34.660962 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:40:34.660968 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:40:34.660975 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:40:34.660981 | orchestrator |
2025-07-12 19:40:34.660987 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-07-12 19:40:34.660993 | orchestrator | Saturday 12 July 2025 19:40:13 +0000 (0:00:01.170) 0:03:38.425 *********
2025-07-12 19:40:34.660999 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:40:34.661005 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:40:34.661012 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:40:34.661018 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:40:34.661024 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:40:34.661030 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:40:34.661036 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:40:34.661042 | orchestrator |
2025-07-12 19:40:34.661048 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-07-12 19:40:34.661054 | orchestrator | Saturday 12 July 2025 19:40:13 +0000 (0:00:00.275) 0:03:38.701 *********
2025-07-12 19:40:34.661065 | orchestrator | ok: [testbed-manager]
2025-07-12 19:40:34.661073 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:40:34.661079 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:40:34.661085 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:40:34.661091 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:40:34.661097 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:40:34.661104 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:40:34.661110 | orchestrator |
2025-07-12 19:40:34.661116 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-07-12 19:40:34.661122 | orchestrator | Saturday 12 July 2025 19:40:14 +0000 (0:00:00.739) 0:03:39.440 *********
2025-07-12 19:40:34.661130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:40:34.661138 | orchestrator |
2025-07-12 19:40:34.661145 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-07-12 19:40:34.661151 | orchestrator | Saturday 12 July 2025 19:40:14 +0000 (0:00:00.376) 0:03:39.817 *********
2025-07-12 19:40:34.661157 | orchestrator | ok: [testbed-manager]
2025-07-12 19:40:34.661163 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:40:34.661169 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:40:34.661176 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:40:34.661182 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:40:34.661188 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:40:34.661195 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:40:34.661202 | orchestrator |
2025-07-12 19:40:34.661209 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-07-12 19:40:34.661216 | orchestrator | Saturday 12 July 2025 19:40:23 +0000 (0:00:08.342) 0:03:48.160 *********
2025-07-12 19:40:34.661223 | orchestrator | ok: [testbed-manager]
2025-07-12 19:40:34.661230 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:40:34.661237 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:40:34.661244 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:40:34.661251 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:40:34.661258 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:40:34.661265 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:40:34.661272 | orchestrator |
2025-07-12 19:40:34.661279 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-07-12 19:40:34.661294 | orchestrator | Saturday 12 July 2025 19:40:24 +0000 (0:00:01.257) 0:03:49.417 *********
2025-07-12 19:40:34.661302 | orchestrator | ok: [testbed-manager]
2025-07-12 19:40:34.661309 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:40:34.661315 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:40:34.661322 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:40:34.661329 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:40:34.661336 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:40:34.661343 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:40:34.661350 | orchestrator |
2025-07-12 19:40:34.661357 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-07-12 19:40:34.661364 | orchestrator | Saturday 12 July 2025 19:40:25 +0000 (0:00:00.981) 0:03:50.399 *********
2025-07-12 19:40:34.661371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:40:34.661378 | orchestrator |
2025-07-12 19:40:34.661385 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-07-12 19:40:34.661393 | orchestrator | Saturday 12 July 2025 19:40:25 +0000 (0:00:00.476) 0:03:50.875 *********
2025-07-12 19:40:34.661399 | orchestrator | changed: [testbed-manager]
2025-07-12 19:40:34.661406 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:40:34.661413 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:40:34.661420 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:40:34.661427 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:40:34.661438 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:40:34.661445 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:40:34.661452 | orchestrator |
2025-07-12 19:40:34.661458 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-07-12 19:40:34.661465 | orchestrator | Saturday 12 July 2025 19:40:34 +0000 (0:00:08.277) 0:03:59.152 *********
2025-07-12 19:40:34.661472 | orchestrator | changed: [testbed-manager]
2025-07-12 19:40:34.661479 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:40:34.661486 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:40:34.661497 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:41:42.628196 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:41:42.628305 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:41:42.628319 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:41:42.628331 | orchestrator |
2025-07-12 19:41:42.628344 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-07-12 19:41:42.628358 | orchestrator | Saturday 12 July 2025 19:40:34 +0000 (0:00:00.609) 0:03:59.762 *********
2025-07-12 19:41:42.628369 | orchestrator | changed: [testbed-manager]
2025-07-12 19:41:42.628380 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:41:42.628391 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:41:42.628402 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:41:42.628413 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:41:42.628424 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:41:42.628434 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:41:42.628445 | orchestrator |
2025-07-12 19:41:42.628456 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-07-12 19:41:42.628468 | orchestrator | Saturday 12 July 2025 19:40:35 +0000 (0:00:01.150) 0:04:00.912 *********
2025-07-12 19:41:42.628479 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:41:42.628490 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:41:42.628500 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:41:42.628511 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:41:42.628522 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:41:42.628533 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:41:42.628544 | orchestrator | changed: [testbed-manager]
2025-07-12 19:41:42.628554 | orchestrator |
2025-07-12 19:41:42.628565 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-07-12 19:41:42.628577 | orchestrator | Saturday 12 July 2025 19:40:37 +0000 (0:00:01.793) 0:04:02.706 *********
2025-07-12 19:41:42.628587 | orchestrator | ok: [testbed-manager]
2025-07-12 19:41:42.628599 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:41:42.628611 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:41:42.628622 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:41:42.628633 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:41:42.628644 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:41:42.628655 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:41:42.628666 | orchestrator |
2025-07-12 19:41:42.628677 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-07-12 19:41:42.628689 | orchestrator | Saturday 12 July 2025 19:40:37 +0000 (0:00:00.285) 0:04:02.991 *********
2025-07-12 19:41:42.628700 | orchestrator | ok: [testbed-manager]
2025-07-12 19:41:42.628711 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:41:42.628722 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:41:42.628733 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:41:42.628746 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:41:42.628760 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:41:42.628802 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:41:42.628814 | orchestrator |
2025-07-12 19:41:42.628827 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-07-12 19:41:42.628839 | orchestrator | Saturday 12 July 2025 19:40:38 +0000 (0:00:00.285) 0:04:03.277 *********
2025-07-12 19:41:42.628852 | orchestrator | ok: [testbed-manager]
2025-07-12 19:41:42.628864 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:41:42.628876 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:41:42.628932 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:41:42.628945 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:41:42.628957 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:41:42.628968 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:41:42.628981 | orchestrator |
2025-07-12 19:41:42.628993 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-07-12 19:41:42.629006 | orchestrator | Saturday 12 July 2025 19:40:38 +0000 (0:00:00.335) 0:04:03.613 *********
2025-07-12 19:41:42.629018 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:41:42.629030 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:41:42.629042 | orchestrator | ok: [testbed-manager]
2025-07-12 19:41:42.629054 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:41:42.629067 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:41:42.629079 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:41:42.629090 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:41:42.629101 | orchestrator |
2025-07-12 19:41:42.629112 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-07-12 19:41:42.629136 | orchestrator | Saturday 12 July 2025 19:40:43 +0000 (0:00:05.493) 0:04:09.106 *********
2025-07-12 19:41:42.629159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:41:42.629174 | orchestrator |
2025-07-12 19:41:42.629185 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-07-12 19:41:42.629196 | orchestrator | Saturday 12 July 2025 19:40:44 +0000 (0:00:00.394) 0:04:09.500 *********
2025-07-12 19:41:42.629207 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade) 
2025-07-12 19:41:42.629218 | orchestrator | skipping: [testbed-manager] => (item=apt-daily) 
2025-07-12 19:41:42.629229 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:41:42.629240 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade) 
2025-07-12 19:41:42.629251 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily) 
2025-07-12 19:41:42.629262 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade) 
2025-07-12 19:41:42.629273 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily) 
2025-07-12 19:41:42.629283 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:41:42.629294 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade) 
2025-07-12 19:41:42.629305 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily) 
2025-07-12 19:41:42.629316 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:41:42.629326 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade) 
2025-07-12 19:41:42.629337 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:41:42.629348 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily) 
2025-07-12 19:41:42.629359 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade) 
2025-07-12 19:41:42.629370 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily) 
2025-07-12 19:41:42.629381 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:41:42.629409 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:41:42.629420 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade) 
2025-07-12 19:41:42.629431 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily) 
2025-07-12 19:41:42.629442 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:41:42.629452 | orchestrator |
2025-07-12 19:41:42.629463 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-07-12 19:41:42.629474 | orchestrator | Saturday 12 July 2025 19:40:44 +0000 (0:00:00.341) 0:04:09.842 *********
2025-07-12 19:41:42.629485 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:41:42.629496 | orchestrator |
2025-07-12 19:41:42.629507 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-07-12 19:41:42.629526 | orchestrator | Saturday 12 July 2025 19:40:45 +0000 (0:00:00.391) 0:04:10.233 *********
2025-07-12 19:41:42.629537 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service) 
2025-07-12 19:41:42.629548 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:41:42.629558 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service) 
2025-07-12 19:41:42.629569 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:41:42.629580 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service) 
2025-07-12 19:41:42.629591 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service) 
2025-07-12 19:41:42.629601 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:41:42.629612 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service) 
2025-07-12 19:41:42.629623 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:41:42.629633 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service) 
2025-07-12 19:41:42.629644 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:41:42.629655 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:41:42.629665 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service) 
2025-07-12 19:41:42.629676 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:41:42.629687 | orchestrator |
2025-07-12 19:41:42.629698 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-07-12 19:41:42.629708 | orchestrator | Saturday 12 July 2025 19:40:45 +0000 (0:00:00.346) 0:04:10.580 *********
2025-07-12 19:41:42.629720 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:41:42.629731 | orchestrator |
2025-07-12 19:41:42.629742 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-07-12 19:41:42.629753 | orchestrator | Saturday 12 July 2025 19:40:45 +0000 (0:00:00.517) 0:04:11.098 *********
2025-07-12 19:41:42.629763 | orchestrator | changed: [testbed-manager]
2025-07-12 19:41:42.629798 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:41:42.629809 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:41:42.629820 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:41:42.629831 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:41:42.629841 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:41:42.629852 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:41:42.629863 | orchestrator |
2025-07-12 19:41:42.629874 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-07-12 19:41:42.629885 | orchestrator | Saturday 12 July 2025 19:41:19 +0000 (0:00:33.620) 0:04:44.719 *********
2025-07-12 19:41:42.629895 | orchestrator | changed: [testbed-manager]
2025-07-12 19:41:42.629906 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:41:42.629917 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:41:42.629927 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:41:42.629943 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:41:42.629954 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:41:42.629965 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:41:42.629976 | orchestrator |
2025-07-12 19:41:42.629986 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-07-12 19:41:42.630013 | orchestrator | Saturday 12 July 2025 19:41:27 +0000 (0:00:07.635) 0:04:52.355 *********
2025-07-12 19:41:42.630073 | orchestrator | changed: [testbed-manager]
2025-07-12 19:41:42.630084 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:41:42.630095 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:41:42.630106 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:41:42.630117 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:41:42.630128 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:41:42.630139 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:41:42.630150 | orchestrator |
2025-07-12 19:41:42.630161 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-07-12 19:41:42.630180 | orchestrator | Saturday 12 July 2025 19:41:34 +0000 (0:00:07.441) 0:04:59.796 *********
2025-07-12 19:41:42.630191 | orchestrator | ok: [testbed-manager]
2025-07-12 19:41:42.630201 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:41:42.630212 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:41:42.630223 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:41:42.630234 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:41:42.630245 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:41:42.630255 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:41:42.630266 | orchestrator |
2025-07-12 19:41:42.630277 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-07-12 19:41:42.630288 | orchestrator | Saturday 12 July 2025 19:41:36 +0000 (0:00:01.812) 0:05:01.609 *********
2025-07-12 19:41:42.630299 | orchestrator | changed: [testbed-manager]
2025-07-12 19:41:42.630309 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:41:42.630320 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:41:42.630331 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:41:42.630342 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:41:42.630352 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:41:42.630363 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:41:42.630374 | orchestrator |
2025-07-12 19:41:42.630385 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-07-12 19:41:42.630404 | orchestrator | Saturday 12 July 2025 19:41:42 +0000 (0:00:06.116) 0:05:07.726 *********
2025-07-12 19:41:53.556150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:41:53.556279 | orchestrator |
2025-07-12 19:41:53.556297 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-07-12 19:41:53.556312 | orchestrator | Saturday 12 July 2025 19:41:42 +0000 (0:00:00.390) 0:05:08.116 *********
2025-07-12 19:41:53.556323 | orchestrator | changed: [testbed-manager]
2025-07-12 19:41:53.556335 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:41:53.556347 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:41:53.556358 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:41:53.556369 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:41:53.556380 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:41:53.556391 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:41:53.556402 | orchestrator |
2025-07-12 19:41:53.556413 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-07-12 19:41:53.556425 | orchestrator | Saturday 12 July 2025 19:41:43 +0000 (0:00:00.713) 0:05:08.829 *********
2025-07-12 19:41:53.556435 | orchestrator | ok: [testbed-manager]
2025-07-12 19:41:53.556447 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:41:53.556458 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:41:53.556469 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:41:53.556480 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:41:53.556491 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:41:53.556502 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:41:53.556513 | orchestrator |
2025-07-12 19:41:53.556524 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-07-12 19:41:53.556536 | orchestrator | Saturday 12 July 2025 19:41:45 +0000 (0:00:01.648) 0:05:10.478 *********
2025-07-12 19:41:53.556547 | orchestrator | changed: [testbed-manager]
2025-07-12 19:41:53.556558 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:41:53.556569 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:41:53.556580 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:41:53.556591 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:41:53.556602 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:41:53.556613 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:41:53.556624 | orchestrator |
2025-07-12 19:41:53.556635 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-07-12 19:41:53.556646 | orchestrator | Saturday 12 July 2025 19:41:46 +0000 (0:00:00.788) 0:05:11.267 *********
2025-07-12 19:41:53.556683 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:41:53.556695 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:41:53.556708 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:41:53.556721 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:41:53.556733 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:41:53.556745 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:41:53.556758 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:41:53.556800 | orchestrator |
2025-07-12 19:41:53.556819 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-07-12 19:41:53.556840 | orchestrator | Saturday 12 July 2025 19:41:46 +0000 (0:00:00.283) 0:05:11.550 *********
2025-07-12 19:41:53.556859 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:41:53.556875 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:41:53.556888 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:41:53.556900 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:41:53.556912 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:41:53.556924 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:41:53.556937 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:41:53.556949 | orchestrator |
2025-07-12 19:41:53.556961 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-07-12 19:41:53.556973 | orchestrator | Saturday 12 July 2025 19:41:46 +0000 (0:00:00.368) 0:05:11.918 *********
2025-07-12 19:41:53.556986 | orchestrator | ok: [testbed-manager]
2025-07-12 19:41:53.557013 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:41:53.557025 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:41:53.557038 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:41:53.557052 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:41:53.557072 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:41:53.557093 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:41:53.557111 | orchestrator |
2025-07-12 19:41:53.557123 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-07-12 19:41:53.557134 | orchestrator | Saturday 12 July 2025 19:41:47 +0000 (0:00:00.290) 0:05:12.208 *********
2025-07-12 19:41:53.557144 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:41:53.557155 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:41:53.557166 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:41:53.557177 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:41:53.557187 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:41:53.557198 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:41:53.557209 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:41:53.557219 | orchestrator |
2025-07-12 19:41:53.557230 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-07-12 19:41:53.557242 | orchestrator | Saturday 12 July 2025 19:41:47 +0000 (0:00:00.262) 0:05:12.471 *********
2025-07-12 19:41:53.557253 | orchestrator | ok: [testbed-manager]
2025-07-12 19:41:53.557264 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:41:53.557274 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:41:53.557285 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:41:53.557296 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:41:53.557307 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:41:53.557317 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:41:53.557328 | orchestrator |
2025-07-12 19:41:53.557339 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-07-12 19:41:53.557350 | orchestrator | Saturday 12 July 2025 19:41:47 +0000 (0:00:00.319) 0:05:12.791 *********
2025-07-12 19:41:53.557361 | orchestrator | ok: [testbed-manager] => 
2025-07-12 19:41:53.557372 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 19:41:53.557382 | orchestrator | ok: [testbed-node-0] => 
2025-07-12 19:41:53.557393 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 19:41:53.557404 | orchestrator | ok: [testbed-node-1] => 
2025-07-12 19:41:53.557415 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 19:41:53.557425 | orchestrator | ok: [testbed-node-2] => 
2025-07-12 19:41:53.557436 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 19:41:53.557474 | orchestrator | ok: [testbed-node-3] => 
2025-07-12 19:41:53.557486 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 19:41:53.557515 | orchestrator | ok: [testbed-node-4] => 
2025-07-12 19:41:53.557527 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 19:41:53.557538 | orchestrator | ok: [testbed-node-5] => 
2025-07-12 19:41:53.557549 | orchestrator |  docker_version: 5:27.5.1
2025-07-12 19:41:53.557560 | orchestrator |
2025-07-12 19:41:53.557571 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-07-12 19:41:53.557589 | orchestrator | Saturday 12 July 2025 19:41:47 +0000 (0:00:00.265) 0:05:13.057 *********
2025-07-12 19:41:53.557607 | orchestrator | ok: [testbed-manager] => 
2025-07-12 19:41:53.557626 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 19:41:53.557643 | orchestrator | ok: [testbed-node-0] => 
2025-07-12 19:41:53.557661 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 19:41:53.557678 | orchestrator | ok: [testbed-node-1] => 
2025-07-12 19:41:53.557696 | orchestrator |  docker_cli_version: 5:27.5.1
2025-07-12 19:41:53.557713 | orchestrator | ok: [testbed-node-2] => 
2025-07-12 19:41:53.557731 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-12 19:41:53.557750 | orchestrator | ok: [testbed-node-3] =>  2025-07-12 19:41:53.557792 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-12 19:41:53.557811 | orchestrator | ok: [testbed-node-4] =>  2025-07-12 19:41:53.557826 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-12 19:41:53.557837 | orchestrator | ok: [testbed-node-5] =>  2025-07-12 19:41:53.557848 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-12 19:41:53.557859 | orchestrator | 2025-07-12 19:41:53.557870 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-07-12 19:41:53.557881 | orchestrator | Saturday 12 July 2025 19:41:48 +0000 (0:00:00.417) 0:05:13.474 ********* 2025-07-12 19:41:53.557891 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:41:53.557902 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:41:53.557912 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:41:53.557923 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:41:53.557934 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:41:53.557944 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:41:53.557955 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:41:53.557966 | orchestrator | 2025-07-12 19:41:53.557977 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-07-12 19:41:53.557987 | orchestrator | Saturday 12 July 2025 19:41:48 +0000 (0:00:00.269) 0:05:13.744 ********* 2025-07-12 19:41:53.557998 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:41:53.558009 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:41:53.558072 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:41:53.558087 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:41:53.558098 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:41:53.558108 | orchestrator | skipping: [testbed-node-4] 
2025-07-12 19:41:53.558119 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:41:53.558130 | orchestrator | 2025-07-12 19:41:53.558141 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-07-12 19:41:53.558153 | orchestrator | Saturday 12 July 2025 19:41:48 +0000 (0:00:00.310) 0:05:14.055 ********* 2025-07-12 19:41:53.558166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:41:53.558182 | orchestrator | 2025-07-12 19:41:53.558201 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-07-12 19:41:53.558218 | orchestrator | Saturday 12 July 2025 19:41:49 +0000 (0:00:00.398) 0:05:14.453 ********* 2025-07-12 19:41:53.558236 | orchestrator | ok: [testbed-manager] 2025-07-12 19:41:53.558252 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:41:53.558268 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:41:53.558285 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:41:53.558316 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:41:53.558334 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:41:53.558351 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:41:53.558369 | orchestrator | 2025-07-12 19:41:53.558387 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-07-12 19:41:53.558416 | orchestrator | Saturday 12 July 2025 19:41:50 +0000 (0:00:00.819) 0:05:15.273 ********* 2025-07-12 19:41:53.558435 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:41:53.558452 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:41:53.558470 | orchestrator | ok: [testbed-manager] 2025-07-12 19:41:53.558489 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:41:53.558503 | orchestrator | ok: [testbed-node-3] 
2025-07-12 19:41:53.558514 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:41:53.558524 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:41:53.558535 | orchestrator | 2025-07-12 19:41:53.558546 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-07-12 19:41:53.558558 | orchestrator | Saturday 12 July 2025 19:41:52 +0000 (0:00:02.788) 0:05:18.062 ********* 2025-07-12 19:41:53.558569 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-07-12 19:41:53.558580 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-07-12 19:41:53.558591 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-07-12 19:41:53.558601 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-07-12 19:41:53.558612 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-07-12 19:41:53.558623 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-07-12 19:41:53.558633 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:41:53.558644 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-07-12 19:41:53.558655 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-07-12 19:41:53.558665 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-07-12 19:41:53.558676 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:41:53.558687 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-07-12 19:41:53.558697 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-07-12 19:41:53.558708 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-07-12 19:41:53.558718 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:41:53.558729 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-07-12 19:41:53.558740 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-07-12 19:41:53.558751 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 19:41:53.558799 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-07-12 19:42:53.654905 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-07-12 19:42:53.655023 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-07-12 19:42:53.655037 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-07-12 19:42:53.655049 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:42:53.655061 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:42:53.655073 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-07-12 19:42:53.655084 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-07-12 19:42:53.655095 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-07-12 19:42:53.655106 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:42:53.655118 | orchestrator | 2025-07-12 19:42:53.655130 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-07-12 19:42:53.655143 | orchestrator | Saturday 12 July 2025 19:41:53 +0000 (0:00:00.736) 0:05:18.799 ********* 2025-07-12 19:42:53.655154 | orchestrator | ok: [testbed-manager] 2025-07-12 19:42:53.655165 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.655176 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.655187 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.655198 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.655209 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.655245 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.655257 | orchestrator | 2025-07-12 19:42:53.655268 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-07-12 19:42:53.655279 | orchestrator | Saturday 12 July 2025 19:41:59 +0000 (0:00:06.196) 0:05:24.995 ********* 2025-07-12 19:42:53.655290 | 
orchestrator | ok: [testbed-manager] 2025-07-12 19:42:53.655301 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.655311 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.655322 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.655333 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.655344 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.655355 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.655365 | orchestrator | 2025-07-12 19:42:53.655376 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-07-12 19:42:53.655387 | orchestrator | Saturday 12 July 2025 19:42:00 +0000 (0:00:01.025) 0:05:26.021 ********* 2025-07-12 19:42:53.655399 | orchestrator | ok: [testbed-manager] 2025-07-12 19:42:53.655412 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.655424 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.655437 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.655449 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.655461 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.655474 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.655486 | orchestrator | 2025-07-12 19:42:53.655498 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-07-12 19:42:53.655511 | orchestrator | Saturday 12 July 2025 19:42:08 +0000 (0:00:07.520) 0:05:33.542 ********* 2025-07-12 19:42:53.655523 | orchestrator | changed: [testbed-manager] 2025-07-12 19:42:53.655535 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.655547 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.655559 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.655573 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.655585 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.655597 | orchestrator | changed: 
[testbed-node-1] 2025-07-12 19:42:53.655607 | orchestrator | 2025-07-12 19:42:53.655618 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-07-12 19:42:53.655629 | orchestrator | Saturday 12 July 2025 19:42:11 +0000 (0:00:03.114) 0:05:36.657 ********* 2025-07-12 19:42:53.655640 | orchestrator | ok: [testbed-manager] 2025-07-12 19:42:53.655651 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.655662 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.655673 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.655684 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.655694 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.655720 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.655732 | orchestrator | 2025-07-12 19:42:53.655743 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-07-12 19:42:53.655754 | orchestrator | Saturday 12 July 2025 19:42:13 +0000 (0:00:01.536) 0:05:38.193 ********* 2025-07-12 19:42:53.655788 | orchestrator | ok: [testbed-manager] 2025-07-12 19:42:53.655799 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.655810 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.655820 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.655831 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.655842 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.655853 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.655864 | orchestrator | 2025-07-12 19:42:53.655874 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-07-12 19:42:53.655886 | orchestrator | Saturday 12 July 2025 19:42:14 +0000 (0:00:01.297) 0:05:39.491 ********* 2025-07-12 19:42:53.655896 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:42:53.655907 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
19:42:53.655918 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:42:53.655937 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:42:53.655948 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:42:53.655959 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:42:53.655970 | orchestrator | changed: [testbed-manager] 2025-07-12 19:42:53.655981 | orchestrator | 2025-07-12 19:42:53.655992 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-07-12 19:42:53.656003 | orchestrator | Saturday 12 July 2025 19:42:14 +0000 (0:00:00.606) 0:05:40.098 ********* 2025-07-12 19:42:53.656014 | orchestrator | ok: [testbed-manager] 2025-07-12 19:42:53.656025 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.656035 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.656046 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.656057 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.656068 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.656079 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.656090 | orchestrator | 2025-07-12 19:42:53.656101 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-07-12 19:42:53.656112 | orchestrator | Saturday 12 July 2025 19:42:25 +0000 (0:00:10.409) 0:05:50.507 ********* 2025-07-12 19:42:53.656123 | orchestrator | changed: [testbed-manager] 2025-07-12 19:42:53.656134 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.656161 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.656173 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.656184 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.656195 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.656206 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.656217 | orchestrator | 2025-07-12 19:42:53.656228 | orchestrator | TASK 
[osism.services.docker : Install docker-cli package] ********************** 2025-07-12 19:42:53.656239 | orchestrator | Saturday 12 July 2025 19:42:26 +0000 (0:00:00.955) 0:05:51.463 ********* 2025-07-12 19:42:53.656250 | orchestrator | ok: [testbed-manager] 2025-07-12 19:42:53.656261 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.656272 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.656283 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.656294 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.656305 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.656316 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.656326 | orchestrator | 2025-07-12 19:42:53.656337 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-07-12 19:42:53.656348 | orchestrator | Saturday 12 July 2025 19:42:35 +0000 (0:00:08.963) 0:06:00.427 ********* 2025-07-12 19:42:53.656359 | orchestrator | ok: [testbed-manager] 2025-07-12 19:42:53.656370 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.656381 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.656392 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.656402 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.656413 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.656424 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.656435 | orchestrator | 2025-07-12 19:42:53.656446 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-07-12 19:42:53.656457 | orchestrator | Saturday 12 July 2025 19:42:47 +0000 (0:00:11.752) 0:06:12.179 ********* 2025-07-12 19:42:53.656468 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-07-12 19:42:53.656479 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-07-12 19:42:53.656490 | orchestrator | ok: [testbed-node-1] => 
(item=python3-docker) 2025-07-12 19:42:53.656501 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-07-12 19:42:53.656512 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-07-12 19:42:53.656523 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-07-12 19:42:53.656534 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-07-12 19:42:53.656544 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-07-12 19:42:53.656555 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-07-12 19:42:53.656573 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-07-12 19:42:53.656584 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-07-12 19:42:53.656595 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-07-12 19:42:53.656606 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-07-12 19:42:53.656617 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-07-12 19:42:53.656628 | orchestrator | 2025-07-12 19:42:53.656639 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-07-12 19:42:53.656650 | orchestrator | Saturday 12 July 2025 19:42:48 +0000 (0:00:01.187) 0:06:13.366 ********* 2025-07-12 19:42:53.656661 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:42:53.656672 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:42:53.656683 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:42:53.656694 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:42:53.656705 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:42:53.656716 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:42:53.656727 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:42:53.656737 | orchestrator | 2025-07-12 19:42:53.656749 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-07-12 19:42:53.656788 | 
orchestrator | Saturday 12 July 2025 19:42:48 +0000 (0:00:00.505) 0:06:13.872 ********* 2025-07-12 19:42:53.656807 | orchestrator | ok: [testbed-manager] 2025-07-12 19:42:53.656819 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:42:53.656830 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:42:53.656841 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:42:53.656852 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:42:53.656863 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:42:53.656874 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:42:53.656885 | orchestrator | 2025-07-12 19:42:53.656896 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-07-12 19:42:53.656908 | orchestrator | Saturday 12 July 2025 19:42:52 +0000 (0:00:04.043) 0:06:17.916 ********* 2025-07-12 19:42:53.656919 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:42:53.656930 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:42:53.656941 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:42:53.656952 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:42:53.656963 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:42:53.656974 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:42:53.656985 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:42:53.656996 | orchestrator | 2025-07-12 19:42:53.657007 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-07-12 19:42:53.657019 | orchestrator | Saturday 12 July 2025 19:42:53 +0000 (0:00:00.483) 0:06:18.400 ********* 2025-07-12 19:42:53.657030 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-07-12 19:42:53.657041 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-07-12 19:42:53.657052 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:42:53.657063 | 
orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-07-12 19:42:53.657074 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-07-12 19:42:53.657085 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:42:53.657096 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-07-12 19:42:53.657106 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-07-12 19:42:53.657117 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:42:53.657128 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-07-12 19:42:53.657139 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-07-12 19:42:53.657156 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:43:12.352538 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-07-12 19:43:12.352647 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-07-12 19:43:12.352686 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:43:12.352699 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-07-12 19:43:12.352710 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-07-12 19:43:12.352721 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:43:12.352732 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-07-12 19:43:12.352743 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-07-12 19:43:12.352754 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:43:12.352814 | orchestrator | 2025-07-12 19:43:12.352828 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-07-12 19:43:12.352840 | orchestrator | Saturday 12 July 2025 19:42:53 +0000 (0:00:00.564) 0:06:18.965 ********* 2025-07-12 19:43:12.352851 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:43:12.352862 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 19:43:12.352873 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:43:12.352884 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:43:12.352895 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:43:12.352906 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:43:12.352917 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:43:12.352928 | orchestrator | 2025-07-12 19:43:12.352940 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-07-12 19:43:12.352951 | orchestrator | Saturday 12 July 2025 19:42:54 +0000 (0:00:00.503) 0:06:19.469 ********* 2025-07-12 19:43:12.352962 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:43:12.352973 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:43:12.352984 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:43:12.352995 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:43:12.353005 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:43:12.353016 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:43:12.353027 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:43:12.353038 | orchestrator | 2025-07-12 19:43:12.353050 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-07-12 19:43:12.353063 | orchestrator | Saturday 12 July 2025 19:42:54 +0000 (0:00:00.522) 0:06:19.991 ********* 2025-07-12 19:43:12.353075 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:43:12.353087 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:43:12.353100 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:43:12.353112 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:43:12.353125 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:43:12.353137 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:43:12.353149 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:43:12.353160 | orchestrator | 
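The earlier "Pin docker package version" and "Lock containerd package" tasks are commonly realized with an apt preferences pin plus `apt-mark hold`; whether the role uses exactly these mechanisms is an assumption, but the sketch below shows the idea with the `5:27.5.1` version printed by the log (a temp dir stands in for `/etc` so this runs unprivileged):

```shell
#!/bin/sh
# Sketch: pin docker-ce to an exact version and hold containerd.
set -eu
root="$(mktemp -d)"
mkdir -p "$root/etc/apt/preferences.d"

docker_version="5:27.5.1"   # from the "Print used docker version" task
cat > "$root/etc/apt/preferences.d/docker-ce" <<EOF
Package: docker-ce
Pin: version ${docker_version}*
Pin-Priority: 1001
EOF

# "Lock containerd package" / "Unlock containerd package" map to:
#   apt-mark hold containerd.io
#   apt-mark unhold containerd.io
cat "$root/etc/apt/preferences.d/docker-ce"
```

A priority above 1000 forces apt to keep (or even downgrade to) the pinned version, which is why the subsequent install tasks converge on the same version on every node.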
2025-07-12 19:43:12.353173 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-07-12 19:43:12.353185 | orchestrator | Saturday 12 July 2025 19:42:55 +0000 (0:00:00.692) 0:06:20.684 ********* 2025-07-12 19:43:12.353198 | orchestrator | ok: [testbed-manager] 2025-07-12 19:43:12.353210 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:43:12.353222 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:43:12.353234 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:43:12.353246 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:43:12.353258 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:43:12.353270 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:43:12.353282 | orchestrator | 2025-07-12 19:43:12.353294 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-07-12 19:43:12.353307 | orchestrator | Saturday 12 July 2025 19:42:57 +0000 (0:00:01.637) 0:06:22.322 ********* 2025-07-12 19:43:12.353321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:43:12.353335 | orchestrator | 2025-07-12 19:43:12.353347 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-07-12 19:43:12.353368 | orchestrator | Saturday 12 July 2025 19:42:58 +0000 (0:00:00.823) 0:06:23.145 ********* 2025-07-12 19:43:12.353381 | orchestrator | ok: [testbed-manager] 2025-07-12 19:43:12.353393 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:43:12.353406 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:43:12.353417 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:43:12.353428 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:43:12.353439 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:43:12.353449 | orchestrator | 
changed: [testbed-node-5]
2025-07-12 19:43:12.353460 | orchestrator |
2025-07-12 19:43:12.353471 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-07-12 19:43:12.353483 | orchestrator | Saturday 12 July 2025 19:42:58 +0000 (0:00:00.822) 0:06:23.968 *********
2025-07-12 19:43:12.353494 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:12.353505 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:43:12.353515 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:43:12.353526 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:43:12.353537 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:43:12.353548 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:43:12.353559 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:43:12.353570 | orchestrator |
2025-07-12 19:43:12.353581 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-07-12 19:43:12.353592 | orchestrator | Saturday 12 July 2025 19:42:59 +0000 (0:00:01.058) 0:06:25.027 *********
2025-07-12 19:43:12.353603 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:12.353614 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:43:12.353624 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:43:12.353635 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:43:12.353646 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:43:12.353657 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:43:12.353667 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:43:12.353678 | orchestrator |
2025-07-12 19:43:12.353689 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-07-12 19:43:12.353701 | orchestrator | Saturday 12 July 2025 19:43:01 +0000 (0:00:01.322) 0:06:26.349 *********
2025-07-12 19:43:12.353712 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:43:12.353740 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:12.353752 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:12.353830 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:12.353843 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:43:12.353854 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:43:12.353866 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:43:12.353876 | orchestrator |
2025-07-12 19:43:12.353888 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-07-12 19:43:12.353899 | orchestrator | Saturday 12 July 2025 19:43:02 +0000 (0:00:01.340) 0:06:27.690 *********
2025-07-12 19:43:12.353910 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:12.353921 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:43:12.353932 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:43:12.353943 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:43:12.353954 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:43:12.353965 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:43:12.353975 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:43:12.353986 | orchestrator |
2025-07-12 19:43:12.353997 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-07-12 19:43:12.354008 | orchestrator | Saturday 12 July 2025 19:43:03 +0000 (0:00:01.345) 0:06:29.035 *********
2025-07-12 19:43:12.354078 | orchestrator | changed: [testbed-manager]
2025-07-12 19:43:12.354091 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:43:12.354102 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:43:12.354113 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:43:12.354125 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:43:12.354136 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:43:12.354147 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:43:12.354167 | orchestrator |
2025-07-12 19:43:12.354178 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-07-12 19:43:12.354189 | orchestrator | Saturday 12 July 2025 19:43:05 +0000 (0:00:01.594) 0:06:30.630 *********
2025-07-12 19:43:12.354200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:43:12.354212 | orchestrator |
2025-07-12 19:43:12.354223 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-07-12 19:43:12.354234 | orchestrator | Saturday 12 July 2025 19:43:06 +0000 (0:00:00.833) 0:06:31.463 *********
2025-07-12 19:43:12.354245 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:12.354256 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:12.354267 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:12.354278 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:12.354289 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:43:12.354300 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:43:12.354311 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:43:12.354322 | orchestrator |
2025-07-12 19:43:12.354333 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-07-12 19:43:12.354344 | orchestrator | Saturday 12 July 2025 19:43:07 +0000 (0:00:01.372) 0:06:32.835 *********
2025-07-12 19:43:12.354355 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:12.354366 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:12.354377 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:12.354388 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:12.354407 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:43:12.354427 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:43:12.354447 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:43:12.354467 | orchestrator |
2025-07-12 19:43:12.354507 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-07-12 19:43:12.354520 | orchestrator | Saturday 12 July 2025 19:43:08 +0000 (0:00:01.069) 0:06:33.905 *********
2025-07-12 19:43:12.354531 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:12.354542 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:12.354552 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:12.354563 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:12.354573 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:43:12.354584 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:43:12.354599 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:43:12.354610 | orchestrator |
2025-07-12 19:43:12.354621 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-07-12 19:43:12.354632 | orchestrator | Saturday 12 July 2025 19:43:10 +0000 (0:00:01.314) 0:06:35.219 *********
2025-07-12 19:43:12.354643 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:12.354653 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:12.354664 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:12.354675 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:43:12.354685 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:43:12.354696 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:12.354706 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:43:12.354717 | orchestrator |
2025-07-12 19:43:12.354728 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-07-12 19:43:12.354739 | orchestrator | Saturday 12 July 2025 19:43:11 +0000 (0:00:00.827) 0:06:36.331 *********
2025-07-12 19:43:12.354750 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:43:12.355025 | orchestrator |
2025-07-12 19:43:12.355041 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 19:43:12.355053 | orchestrator | Saturday 12 July 2025 19:43:12 +0000 (0:00:00.039) 0:06:37.159 *********
2025-07-12 19:43:12.355064 | orchestrator |
2025-07-12 19:43:12.355074 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 19:43:12.355096 | orchestrator | Saturday 12 July 2025 19:43:12 +0000 (0:00:00.043) 0:06:37.199 *********
2025-07-12 19:43:12.355107 | orchestrator |
2025-07-12 19:43:12.355118 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 19:43:12.355129 | orchestrator | Saturday 12 July 2025 19:43:12 +0000 (0:00:00.043) 0:06:37.242 *********
2025-07-12 19:43:12.355140 | orchestrator |
2025-07-12 19:43:12.355151 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 19:43:12.355162 | orchestrator | Saturday 12 July 2025 19:43:12 +0000 (0:00:00.037) 0:06:37.280 *********
2025-07-12 19:43:12.355173 | orchestrator |
2025-07-12 19:43:12.355184 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 19:43:12.355206 | orchestrator | Saturday 12 July 2025 19:43:12 +0000 (0:00:00.037) 0:06:37.317 *********
2025-07-12 19:43:37.937814 | orchestrator |
2025-07-12 19:43:37.937921 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 19:43:37.937938 | orchestrator | Saturday 12 July 2025 19:43:12 +0000 (0:00:00.043) 0:06:37.361 *********
2025-07-12 19:43:37.937950 | orchestrator |
2025-07-12 19:43:37.937961 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 19:43:37.937972 | orchestrator | Saturday 12 July 2025 19:43:12 +0000 (0:00:00.038) 0:06:37.400 *********
2025-07-12 19:43:37.937983 | orchestrator |
2025-07-12 19:43:37.937994 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-12 19:43:37.938005 | orchestrator | Saturday 12 July 2025 19:43:12 +0000 (0:00:00.038) 0:06:37.438 *********
2025-07-12 19:43:37.938054 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:37.938069 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:37.938081 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:37.938091 | orchestrator |
2025-07-12 19:43:37.938103 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-07-12 19:43:37.938114 | orchestrator | Saturday 12 July 2025 19:43:13 +0000 (0:00:01.356) 0:06:38.794 *********
2025-07-12 19:43:37.938125 | orchestrator | changed: [testbed-manager]
2025-07-12 19:43:37.938137 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:43:37.938148 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:43:37.938159 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:43:37.938170 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:43:37.938181 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:43:37.938192 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:43:37.938203 | orchestrator |
2025-07-12 19:43:37.938214 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-07-12 19:43:37.938226 | orchestrator | Saturday 12 July 2025 19:43:15 +0000 (0:00:01.338) 0:06:40.133 *********
2025-07-12 19:43:37.938237 | orchestrator | changed: [testbed-manager]
2025-07-12 19:43:37.938249 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:43:37.938262 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:43:37.938274 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:43:37.938286 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:43:37.938299 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:43:37.938310 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:43:37.938323 | orchestrator |
2025-07-12 19:43:37.938336 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-07-12 19:43:37.938348 | orchestrator | Saturday 12 July 2025 19:43:16 +0000 (0:00:01.117) 0:06:41.251 *********
2025-07-12 19:43:37.938360 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:43:37.938372 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:43:37.938385 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:43:37.938397 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:43:37.938409 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:43:37.938421 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:43:37.938434 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:43:37.938446 | orchestrator |
2025-07-12 19:43:37.938458 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-07-12 19:43:37.938494 | orchestrator | Saturday 12 July 2025 19:43:18 +0000 (0:00:02.303) 0:06:43.555 *********
2025-07-12 19:43:37.938506 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:43:37.938519 | orchestrator |
2025-07-12 19:43:37.938531 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-07-12 19:43:37.938544 | orchestrator | Saturday 12 July 2025 19:43:18 +0000 (0:00:00.099) 0:06:43.654 *********
2025-07-12 19:43:37.938556 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:37.938568 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:43:37.938580 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:43:37.938593 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:43:37.938605 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:43:37.938617 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:43:37.938640 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:43:37.938651 | orchestrator |
2025-07-12 19:43:37.938662 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-07-12 19:43:37.938675 | orchestrator | Saturday 12 July 2025 19:43:19 +0000 (0:00:00.998) 0:06:44.653 *********
2025-07-12 19:43:37.938686 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:43:37.938697 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:43:37.938707 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:43:37.938718 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:43:37.938729 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:43:37.938739 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:43:37.938750 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:43:37.938780 | orchestrator |
2025-07-12 19:43:37.938792 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-07-12 19:43:37.938803 | orchestrator | Saturday 12 July 2025 19:43:20 +0000 (0:00:00.718) 0:06:45.371 *********
2025-07-12 19:43:37.938815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:43:37.938829 | orchestrator |
2025-07-12 19:43:37.938840 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-07-12 19:43:37.938851 | orchestrator | Saturday 12 July 2025 19:43:21 +0000 (0:00:00.907) 0:06:46.278 *********
2025-07-12 19:43:37.938862 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:37.938873 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:37.938883 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:37.938894 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:37.938905 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:43:37.938916 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:43:37.938927 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:43:37.938938 | orchestrator |
2025-07-12 19:43:37.938949 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-07-12 19:43:37.938960 | orchestrator | Saturday 12 July 2025 19:43:22 +0000 (0:00:00.842) 0:06:47.121 *********
2025-07-12 19:43:37.938971 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-07-12 19:43:37.938982 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-07-12 19:43:37.938994 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-07-12 19:43:37.939023 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-07-12 19:43:37.939035 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-07-12 19:43:37.939046 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-07-12 19:43:37.939057 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-07-12 19:43:37.939068 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-07-12 19:43:37.939079 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-07-12 19:43:37.939090 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-07-12 19:43:37.939101 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-07-12 19:43:37.939125 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-07-12 19:43:37.939136 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-07-12 19:43:37.939147 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-07-12 19:43:37.939158 | orchestrator |
2025-07-12 19:43:37.939169 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-07-12 19:43:37.939180 | orchestrator | Saturday 12 July 2025 19:43:24 +0000 (0:00:02.692) 0:06:49.814 *********
2025-07-12 19:43:37.939191 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:43:37.939201 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:43:37.939212 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:43:37.939223 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:43:37.939234 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:43:37.939245 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:43:37.939255 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:43:37.939266 | orchestrator |
2025-07-12 19:43:37.939277 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-07-12 19:43:37.939288 | orchestrator | Saturday 12 July 2025 19:43:25 +0000 (0:00:00.506) 0:06:50.321 *********
2025-07-12 19:43:37.939302 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:43:37.939315 | orchestrator |
2025-07-12 19:43:37.939327 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-07-12 19:43:37.939338 | orchestrator | Saturday 12 July 2025 19:43:26 +0000 (0:00:00.820) 0:06:51.141 *********
2025-07-12 19:43:37.939348 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:37.939360 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:37.939371 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:37.939381 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:37.939392 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:43:37.939403 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:43:37.939414 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:43:37.939425 | orchestrator |
2025-07-12 19:43:37.939435 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-07-12 19:43:37.939447 | orchestrator | Saturday 12 July 2025 19:43:27 +0000 (0:00:01.064) 0:06:52.205 *********
2025-07-12 19:43:37.939457 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:37.939468 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:37.939479 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:37.939490 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:37.939501 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:43:37.939511 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:43:37.939522 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:43:37.939533 | orchestrator |
2025-07-12 19:43:37.939544 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-07-12 19:43:37.939555 | orchestrator | Saturday 12 July 2025 19:43:27 +0000 (0:00:00.839) 0:06:53.044 *********
2025-07-12 19:43:37.939571 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:43:37.939582 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:43:37.939593 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:43:37.939604 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:43:37.939614 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:43:37.939625 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:43:37.939636 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:43:37.939647 | orchestrator |
2025-07-12 19:43:37.939658 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-07-12 19:43:37.939669 | orchestrator | Saturday 12 July 2025 19:43:28 +0000 (0:00:00.479) 0:06:53.524 *********
2025-07-12 19:43:37.939680 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:37.939691 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:43:37.939702 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:43:37.939713 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:43:37.939731 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:43:37.939742 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:43:37.939753 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:43:37.939799 | orchestrator |
2025-07-12 19:43:37.939811 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-07-12 19:43:37.939822 | orchestrator | Saturday 12 July 2025 19:43:29 +0000 (0:00:01.394) 0:06:54.919 *********
2025-07-12 19:43:37.939832 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:43:37.939843 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:43:37.939854 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:43:37.939865 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:43:37.939875 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:43:37.939886 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:43:37.939897 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:43:37.939908 | orchestrator |
2025-07-12 19:43:37.939919 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-07-12 19:43:37.939929 | orchestrator | Saturday 12 July 2025 19:43:30 +0000 (0:00:00.524) 0:06:55.444 *********
2025-07-12 19:43:37.939940 | orchestrator | ok: [testbed-manager]
2025-07-12 19:43:37.939951 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:43:37.939962 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:43:37.939973 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:43:37.939983 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:43:37.939994 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:43:37.940004 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:43:37.940015 | orchestrator |
2025-07-12 19:43:37.940026 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-07-12 19:43:37.940044 | orchestrator | Saturday 12 July 2025 19:43:37 +0000 (0:00:07.589) 0:07:03.034 *********
2025-07-12 19:44:11.279597 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.279708 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:44:11.279725 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:44:11.279737 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:44:11.279748 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:44:11.279789 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:44:11.279803 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:44:11.279815 | orchestrator |
2025-07-12 19:44:11.279827 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-07-12 19:44:11.279840 | orchestrator | Saturday 12 July 2025 19:43:39 +0000 (0:00:01.340) 0:07:04.374 *********
2025-07-12 19:44:11.279852 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.279863 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:44:11.279874 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:44:11.279884 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:44:11.279895 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:44:11.279906 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:44:11.279917 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:44:11.279928 | orchestrator |
2025-07-12 19:44:11.279939 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-07-12 19:44:11.279951 | orchestrator | Saturday 12 July 2025 19:43:41 +0000 (0:00:01.756) 0:07:06.131 *********
2025-07-12 19:44:11.279962 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.279973 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:44:11.279984 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:44:11.279995 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:44:11.280005 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:44:11.280016 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:44:11.280027 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:44:11.280038 | orchestrator |
2025-07-12 19:44:11.280049 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 19:44:11.280060 | orchestrator | Saturday 12 July 2025 19:43:42 +0000 (0:00:01.643) 0:07:07.774 *********
2025-07-12 19:44:11.280071 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.280082 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:11.280122 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:11.280134 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:11.280146 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:11.280158 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:11.280170 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:11.280182 | orchestrator |
2025-07-12 19:44:11.280195 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 19:44:11.280207 | orchestrator | Saturday 12 July 2025 19:43:43 +0000 (0:00:01.100) 0:07:08.875 *********
2025-07-12 19:44:11.280220 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:44:11.280233 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:44:11.280245 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:44:11.280258 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:44:11.280270 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:44:11.280283 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:44:11.280295 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:44:11.280307 | orchestrator |
2025-07-12 19:44:11.280319 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-07-12 19:44:11.280332 | orchestrator | Saturday 12 July 2025 19:43:44 +0000 (0:00:00.849) 0:07:09.724 *********
2025-07-12 19:44:11.280344 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:44:11.280356 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:44:11.280368 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:44:11.280380 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:44:11.280392 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:44:11.280404 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:44:11.280416 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:44:11.280427 | orchestrator |
2025-07-12 19:44:11.280439 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-07-12 19:44:11.280466 | orchestrator | Saturday 12 July 2025 19:43:45 +0000 (0:00:00.496) 0:07:10.220 *********
2025-07-12 19:44:11.280479 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.280491 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:11.280503 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:11.280513 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:11.280524 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:11.280535 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:11.280546 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:11.280557 | orchestrator |
2025-07-12 19:44:11.280567 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-07-12 19:44:11.280578 | orchestrator | Saturday 12 July 2025 19:43:45 +0000 (0:00:00.654) 0:07:10.875 *********
2025-07-12 19:44:11.280589 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.280600 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:11.280611 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:11.280622 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:11.280633 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:11.280643 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:11.280654 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:11.280665 | orchestrator |
2025-07-12 19:44:11.280675 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-07-12 19:44:11.280686 | orchestrator | Saturday 12 July 2025 19:43:46 +0000 (0:00:00.489) 0:07:11.364 *********
2025-07-12 19:44:11.280697 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.280708 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:11.280719 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:11.280729 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:11.280747 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:11.280792 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:11.280812 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:11.280828 | orchestrator |
2025-07-12 19:44:11.280844 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-07-12 19:44:11.280861 | orchestrator | Saturday 12 July 2025 19:43:46 +0000 (0:00:00.496) 0:07:11.861 *********
2025-07-12 19:44:11.280878 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.280912 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:11.280932 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:11.280945 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:11.280956 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:11.280967 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:11.280978 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:11.280989 | orchestrator |
2025-07-12 19:44:11.281000 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-07-12 19:44:11.281011 | orchestrator | Saturday 12 July 2025 19:43:52 +0000 (0:00:05.500) 0:07:17.361 *********
2025-07-12 19:44:11.281022 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:44:11.281052 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:44:11.281063 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:44:11.281074 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:44:11.281085 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:44:11.281096 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:44:11.281107 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:44:11.281118 | orchestrator |
2025-07-12 19:44:11.281129 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-07-12 19:44:11.281140 | orchestrator | Saturday 12 July 2025 19:43:52 +0000 (0:00:00.521) 0:07:17.882 *********
2025-07-12 19:44:11.281153 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:44:11.281167 | orchestrator |
2025-07-12 19:44:11.281178 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-07-12 19:44:11.281189 | orchestrator | Saturday 12 July 2025 19:43:53 +0000 (0:00:00.965) 0:07:18.848 *********
2025-07-12 19:44:11.281200 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.281211 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:11.281222 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:11.281233 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:11.281244 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:11.281254 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:11.281265 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:11.281276 | orchestrator |
2025-07-12 19:44:11.281287 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-07-12 19:44:11.281298 | orchestrator | Saturday 12 July 2025 19:43:55 +0000 (0:00:02.080) 0:07:20.929 *********
2025-07-12 19:44:11.281309 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.281319 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:11.281330 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:11.281341 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:11.281351 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:11.281362 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:11.281373 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:11.281384 | orchestrator |
2025-07-12 19:44:11.281394 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-07-12 19:44:11.281405 | orchestrator | Saturday 12 July 2025 19:43:56 +0000 (0:00:01.150) 0:07:22.079 *********
2025-07-12 19:44:11.281416 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.281426 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:11.281437 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:11.281448 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:11.281458 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:11.281469 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:11.281480 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:11.281490 | orchestrator |
2025-07-12 19:44:11.281501 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-07-12 19:44:11.281512 | orchestrator | Saturday 12 July 2025 19:43:58 +0000 (0:00:01.062) 0:07:23.141 *********
2025-07-12 19:44:11.281524 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 19:44:11.281545 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 19:44:11.281556 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 19:44:11.281574 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 19:44:11.281586 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 19:44:11.281596 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 19:44:11.281611 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 19:44:11.281629 | orchestrator |
2025-07-12 19:44:11.281656 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-07-12 19:44:11.281677 | orchestrator | Saturday 12 July 2025 19:43:59 +0000 (0:00:01.698) 0:07:24.839 *********
2025-07-12 19:44:11.281694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:44:11.281711 | orchestrator |
2025-07-12 19:44:11.281727 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-07-12 19:44:11.281745 | orchestrator | Saturday 12 July 2025 19:44:00 +0000 (0:00:00.803) 0:07:25.643 *********
2025-07-12 19:44:11.281789 | orchestrator | changed: [testbed-manager]
2025-07-12 19:44:11.281805 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:44:11.281823 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:44:11.281839 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:44:11.281857 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:44:11.281877 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:44:11.281894 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:44:11.281912 | orchestrator |
2025-07-12 19:44:11.281924 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-07-12 19:44:11.281935 | orchestrator | Saturday 12 July 2025 19:44:09 +0000 (0:00:09.019) 0:07:34.663 *********
2025-07-12 19:44:11.281946 | orchestrator | ok: [testbed-manager]
2025-07-12 19:44:11.281956 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:11.281977 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:25.336853 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:25.336962 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:25.336976 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:25.336988 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:25.336999 | orchestrator |
2025-07-12 19:44:25.337012 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-07-12 19:44:25.337025 | orchestrator | Saturday 12 July 2025 19:44:11 +0000 (0:00:01.715) 0:07:36.378 *********
2025-07-12 19:44:25.337036 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:44:25.337047 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:44:25.337058 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:44:25.337069 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:44:25.337080 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:44:25.337090 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:44:25.337101 | orchestrator |
2025-07-12 19:44:25.337113 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-07-12 19:44:25.337124 | orchestrator | Saturday 12 July 2025 19:44:12 +0000 (0:00:01.337) 0:07:37.716 *********
2025-07-12 19:44:25.337136 | orchestrator | changed: [testbed-manager]
2025-07-12 19:44:25.337148 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:44:25.337158 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:44:25.337169 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:44:25.337209 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:44:25.337221 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:44:25.337232 | orchestrator |
changed: [testbed-node-5] 2025-07-12 19:44:25.337243 | orchestrator | 2025-07-12 19:44:25.337254 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-07-12 19:44:25.337264 | orchestrator | 2025-07-12 19:44:25.337275 | orchestrator | TASK [Include hardening role] ************************************************** 2025-07-12 19:44:25.337286 | orchestrator | Saturday 12 July 2025 19:44:14 +0000 (0:00:01.455) 0:07:39.172 ********* 2025-07-12 19:44:25.337297 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:44:25.337308 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:44:25.337318 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:44:25.337329 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:44:25.337340 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:44:25.337350 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:44:25.337362 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:44:25.337374 | orchestrator | 2025-07-12 19:44:25.337386 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-07-12 19:44:25.337398 | orchestrator | 2025-07-12 19:44:25.337410 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-07-12 19:44:25.337423 | orchestrator | Saturday 12 July 2025 19:44:14 +0000 (0:00:00.480) 0:07:39.652 ********* 2025-07-12 19:44:25.337435 | orchestrator | changed: [testbed-manager] 2025-07-12 19:44:25.337447 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:44:25.337459 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:44:25.337471 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:44:25.337483 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:44:25.337495 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:44:25.337507 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:44:25.337519 | orchestrator | 2025-07-12 19:44:25.337531 | 
orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-07-12 19:44:25.337543 | orchestrator | Saturday 12 July 2025 19:44:15 +0000 (0:00:01.342) 0:07:40.995 ********* 2025-07-12 19:44:25.337556 | orchestrator | ok: [testbed-manager] 2025-07-12 19:44:25.337569 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:44:25.337581 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:44:25.337593 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:44:25.337605 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:44:25.337617 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:44:25.337629 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:44:25.337641 | orchestrator | 2025-07-12 19:44:25.337653 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-07-12 19:44:25.337666 | orchestrator | Saturday 12 July 2025 19:44:17 +0000 (0:00:01.413) 0:07:42.408 ********* 2025-07-12 19:44:25.337678 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:44:25.337690 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:44:25.337702 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:44:25.337715 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:44:25.337725 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:44:25.337736 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:44:25.337817 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:44:25.337832 | orchestrator | 2025-07-12 19:44:25.337843 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-07-12 19:44:25.337854 | orchestrator | Saturday 12 July 2025 19:44:18 +0000 (0:00:00.938) 0:07:43.346 ********* 2025-07-12 19:44:25.337865 | orchestrator | changed: [testbed-manager] 2025-07-12 19:44:25.337876 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:44:25.337886 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:44:25.337897 | orchestrator | 
changed: [testbed-node-2] 2025-07-12 19:44:25.337908 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:44:25.337918 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:44:25.337929 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:44:25.337939 | orchestrator | 2025-07-12 19:44:25.337950 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-07-12 19:44:25.337971 | orchestrator | 2025-07-12 19:44:25.337982 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-07-12 19:44:25.337993 | orchestrator | Saturday 12 July 2025 19:44:19 +0000 (0:00:01.248) 0:07:44.595 ********* 2025-07-12 19:44:25.338004 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:44:25.338053 | orchestrator | 2025-07-12 19:44:25.338067 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-07-12 19:44:25.338078 | orchestrator | Saturday 12 July 2025 19:44:20 +0000 (0:00:00.944) 0:07:45.539 ********* 2025-07-12 19:44:25.338089 | orchestrator | ok: [testbed-manager] 2025-07-12 19:44:25.338100 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:44:25.338111 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:44:25.338121 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:44:25.338132 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:44:25.338143 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:44:25.338154 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:44:25.338165 | orchestrator | 2025-07-12 19:44:25.338178 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-07-12 19:44:25.338220 | orchestrator | Saturday 12 July 2025 19:44:21 +0000 (0:00:00.839) 0:07:46.379 ********* 2025-07-12 19:44:25.338240 | orchestrator | changed: [testbed-manager] 2025-07-12 
19:44:25.338260 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:44:25.338280 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:44:25.338292 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:44:25.338303 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:44:25.338314 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:44:25.338324 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:44:25.338335 | orchestrator | 2025-07-12 19:44:25.338346 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-07-12 19:44:25.338357 | orchestrator | Saturday 12 July 2025 19:44:22 +0000 (0:00:01.119) 0:07:47.499 ********* 2025-07-12 19:44:25.338368 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:44:25.338379 | orchestrator | 2025-07-12 19:44:25.338390 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-07-12 19:44:25.338401 | orchestrator | Saturday 12 July 2025 19:44:23 +0000 (0:00:00.928) 0:07:48.428 ********* 2025-07-12 19:44:25.338411 | orchestrator | ok: [testbed-manager] 2025-07-12 19:44:25.338422 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:44:25.338433 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:44:25.338443 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:44:25.338454 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:44:25.338464 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:44:25.338475 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:44:25.338486 | orchestrator | 2025-07-12 19:44:25.338496 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-07-12 19:44:25.338507 | orchestrator | Saturday 12 July 2025 19:44:24 +0000 (0:00:00.876) 0:07:49.305 ********* 2025-07-12 19:44:25.338518 | orchestrator | changed: [testbed-manager] 2025-07-12 
19:44:25.338528 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:44:25.338539 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:44:25.338550 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:44:25.338560 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:44:25.338571 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:44:25.338582 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:44:25.338592 | orchestrator | 2025-07-12 19:44:25.338603 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:44:25.338615 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-07-12 19:44:25.338627 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-07-12 19:44:25.338647 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 19:44:25.338658 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 19:44:25.338669 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 19:44:25.338687 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 19:44:25.338698 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-12 19:44:25.338709 | orchestrator | 2025-07-12 19:44:25.338720 | orchestrator | 2025-07-12 19:44:25.338731 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:44:25.338741 | orchestrator | Saturday 12 July 2025 19:44:25 +0000 (0:00:01.113) 0:07:50.418 ********* 2025-07-12 19:44:25.338752 | orchestrator | =============================================================================== 2025-07-12 
19:44:25.338806 | orchestrator | osism.commons.packages : Install required packages --------------------- 75.69s 2025-07-12 19:44:25.338819 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.84s 2025-07-12 19:44:25.338830 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.62s 2025-07-12 19:44:25.338840 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.95s 2025-07-12 19:44:25.338851 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.96s 2025-07-12 19:44:25.338863 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.75s 2025-07-12 19:44:25.338873 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.24s 2025-07-12 19:44:25.338884 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.41s 2025-07-12 19:44:25.338894 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.02s 2025-07-12 19:44:25.338905 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.96s 2025-07-12 19:44:25.338916 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.34s 2025-07-12 19:44:25.338927 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.28s 2025-07-12 19:44:25.338937 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.64s 2025-07-12 19:44:25.338948 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.59s 2025-07-12 19:44:25.338958 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.52s 2025-07-12 19:44:25.338977 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.44s 2025-07-12 
19:44:25.786741 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.20s 2025-07-12 19:44:25.786863 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.12s 2025-07-12 19:44:25.786875 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.81s 2025-07-12 19:44:25.786883 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.50s 2025-07-12 19:44:26.051299 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-07-12 19:44:26.051390 | orchestrator | + osism apply network 2025-07-12 19:44:38.455682 | orchestrator | 2025-07-12 19:44:38 | INFO  | Task 4f503871-2c3f-4db1-b7c9-bf9aded850be (network) was prepared for execution. 2025-07-12 19:44:38.455792 | orchestrator | 2025-07-12 19:44:38 | INFO  | It takes a moment until task 4f503871-2c3f-4db1-b7c9-bf9aded850be (network) has been started and output is visible here. 2025-07-12 19:45:06.818667 | orchestrator | 2025-07-12 19:45:06.818866 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-07-12 19:45:06.818887 | orchestrator | 2025-07-12 19:45:06.818899 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-07-12 19:45:06.818911 | orchestrator | Saturday 12 July 2025 19:44:42 +0000 (0:00:00.268) 0:00:00.268 ********* 2025-07-12 19:45:06.818923 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:06.818935 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:45:06.818946 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:45:06.818957 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:45:06.818968 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:45:06.818979 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:45:06.818989 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:45:06.819000 | orchestrator | 2025-07-12 19:45:06.819011 | orchestrator | TASK 
[osism.commons.network : Include type specific tasks] ********************* 2025-07-12 19:45:06.819022 | orchestrator | Saturday 12 July 2025 19:44:43 +0000 (0:00:00.746) 0:00:01.015 ********* 2025-07-12 19:45:06.819049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:45:06.819065 | orchestrator | 2025-07-12 19:45:06.819079 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-07-12 19:45:06.819093 | orchestrator | Saturday 12 July 2025 19:44:44 +0000 (0:00:01.222) 0:00:02.237 ********* 2025-07-12 19:45:06.819106 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:06.819119 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:45:06.819132 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:45:06.819144 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:45:06.819156 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:45:06.819168 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:45:06.819181 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:45:06.819194 | orchestrator | 2025-07-12 19:45:06.819206 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-07-12 19:45:06.819218 | orchestrator | Saturday 12 July 2025 19:44:46 +0000 (0:00:01.963) 0:00:04.201 ********* 2025-07-12 19:45:06.819229 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:06.819240 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:45:06.819251 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:45:06.819262 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:45:06.819273 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:45:06.819284 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:45:06.819294 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:45:06.819305 | orchestrator 
| 2025-07-12 19:45:06.819327 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-07-12 19:45:06.819339 | orchestrator | Saturday 12 July 2025 19:44:48 +0000 (0:00:01.943) 0:00:06.144 ********* 2025-07-12 19:45:06.819350 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-07-12 19:45:06.819362 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-07-12 19:45:06.819373 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-07-12 19:45:06.819384 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-07-12 19:45:06.819395 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-07-12 19:45:06.819406 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-07-12 19:45:06.819417 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-07-12 19:45:06.819428 | orchestrator | 2025-07-12 19:45:06.819439 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-07-12 19:45:06.819450 | orchestrator | Saturday 12 July 2025 19:44:49 +0000 (0:00:00.973) 0:00:07.118 ********* 2025-07-12 19:45:06.819461 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 19:45:06.819473 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 19:45:06.819484 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 19:45:06.819521 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 19:45:06.819533 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 19:45:06.819544 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 19:45:06.819554 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 19:45:06.819566 | orchestrator | 2025-07-12 19:45:06.819577 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-07-12 19:45:06.819588 | orchestrator | Saturday 12 July 2025 19:44:52 +0000 (0:00:03.315) 0:00:10.434 ********* 2025-07-12 
19:45:06.819599 | orchestrator | changed: [testbed-manager] 2025-07-12 19:45:06.819610 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:45:06.819621 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:45:06.819632 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:45:06.819643 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:45:06.819653 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:45:06.819664 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:45:06.819675 | orchestrator | 2025-07-12 19:45:06.819686 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-07-12 19:45:06.819697 | orchestrator | Saturday 12 July 2025 19:44:54 +0000 (0:00:01.412) 0:00:11.847 ********* 2025-07-12 19:45:06.819708 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 19:45:06.819719 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 19:45:06.819730 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 19:45:06.819741 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 19:45:06.819782 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 19:45:06.819816 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 19:45:06.819836 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 19:45:06.819853 | orchestrator | 2025-07-12 19:45:06.819872 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-07-12 19:45:06.819884 | orchestrator | Saturday 12 July 2025 19:44:56 +0000 (0:00:01.920) 0:00:13.768 ********* 2025-07-12 19:45:06.819895 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:06.819906 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:45:06.819917 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:45:06.819927 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:45:06.819938 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:45:06.819949 | orchestrator | ok: [testbed-node-4] 
2025-07-12 19:45:06.819960 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:45:06.819971 | orchestrator | 2025-07-12 19:45:06.819982 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-07-12 19:45:06.820028 | orchestrator | Saturday 12 July 2025 19:44:57 +0000 (0:00:01.047) 0:00:14.815 ********* 2025-07-12 19:45:06.820041 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:06.820052 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:45:06.820063 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:45:06.820074 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:45:06.820085 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:45:06.820095 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:45:06.820106 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:45:06.820117 | orchestrator | 2025-07-12 19:45:06.820128 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-07-12 19:45:06.820139 | orchestrator | Saturday 12 July 2025 19:44:57 +0000 (0:00:00.635) 0:00:15.450 ********* 2025-07-12 19:45:06.820150 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:06.820161 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:45:06.820172 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:45:06.820183 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:45:06.820194 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:45:06.820205 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:45:06.820216 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:45:06.820227 | orchestrator | 2025-07-12 19:45:06.820238 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-07-12 19:45:06.820249 | orchestrator | Saturday 12 July 2025 19:44:59 +0000 (0:00:02.089) 0:00:17.540 ********* 2025-07-12 19:45:06.820271 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:45:06.820282 | orchestrator | 
skipping: [testbed-node-1] 2025-07-12 19:45:06.820293 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:45:06.820303 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:45:06.820314 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:45:06.820325 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:45:06.820336 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-07-12 19:45:06.820348 | orchestrator | 2025-07-12 19:45:06.820359 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-07-12 19:45:06.820370 | orchestrator | Saturday 12 July 2025 19:45:00 +0000 (0:00:00.865) 0:00:18.406 ********* 2025-07-12 19:45:06.820381 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:06.820392 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:45:06.820403 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:45:06.820436 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:45:06.820448 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:45:06.820459 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:45:06.820470 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:45:06.820480 | orchestrator | 2025-07-12 19:45:06.820507 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-07-12 19:45:06.820518 | orchestrator | Saturday 12 July 2025 19:45:02 +0000 (0:00:01.672) 0:00:20.078 ********* 2025-07-12 19:45:06.820530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:45:06.820542 | orchestrator | 2025-07-12 19:45:06.820553 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-12 
19:45:06.820564 | orchestrator | Saturday 12 July 2025 19:45:03 +0000 (0:00:01.238) 0:00:21.316 ********* 2025-07-12 19:45:06.820575 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:06.820586 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:45:06.820597 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:45:06.820608 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:45:06.820619 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:45:06.820629 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:45:06.820640 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:45:06.820650 | orchestrator | 2025-07-12 19:45:06.820662 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-07-12 19:45:06.820672 | orchestrator | Saturday 12 July 2025 19:45:04 +0000 (0:00:00.991) 0:00:22.308 ********* 2025-07-12 19:45:06.820683 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:06.820694 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:45:06.820704 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:45:06.820715 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:45:06.820725 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:45:06.820736 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:45:06.820747 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:45:06.820780 | orchestrator | 2025-07-12 19:45:06.820797 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-12 19:45:06.820809 | orchestrator | Saturday 12 July 2025 19:45:05 +0000 (0:00:00.839) 0:00:23.147 ********* 2025-07-12 19:45:06.820819 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 19:45:06.820831 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 19:45:06.820841 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 19:45:06.820852 | orchestrator | skipping: [testbed-node-2] => 
(item=/etc/netplan/01-osism.yaml)  2025-07-12 19:45:06.820863 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 19:45:06.820874 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 19:45:06.820884 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 19:45:06.820903 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 19:45:06.820914 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 19:45:06.820925 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 19:45:06.820935 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 19:45:06.820946 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 19:45:06.820957 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 19:45:06.820968 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 19:45:06.820978 | orchestrator | 2025-07-12 19:45:06.820998 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-07-12 19:45:22.800939 | orchestrator | Saturday 12 July 2025 19:45:06 +0000 (0:00:01.207) 0:00:24.354 ********* 2025-07-12 19:45:22.801065 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:22.801078 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:45:22.801088 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:45:22.801098 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:45:22.801107 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:45:22.801115 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:45:22.801124 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:45:22.801134 | orchestrator | 2025-07-12 
19:45:22.801144 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-07-12 19:45:22.801154 | orchestrator | Saturday 12 July 2025 19:45:07 +0000 (0:00:00.621) 0:00:24.976 ********* 2025-07-12 19:45:22.801165 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-3, testbed-node-0, testbed-node-2, testbed-node-4, testbed-node-5 2025-07-12 19:45:22.801177 | orchestrator | 2025-07-12 19:45:22.801187 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-07-12 19:45:22.801197 | orchestrator | Saturday 12 July 2025 19:45:11 +0000 (0:00:04.449) 0:00:29.426 ********* 2025-07-12 19:45:22.801208 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801232 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 
2025-07-12 19:45:22.801274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801283 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801337 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801348 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801383 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801403 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801411 | orchestrator | 2025-07-12 19:45:22.801421 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-07-12 19:45:22.801430 | orchestrator | Saturday 12 July 2025 19:45:17 +0000 (0:00:05.732) 0:00:35.158 ********* 2025-07-12 19:45:22.801439 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801459 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801470 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801508 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 19:45:22.801528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 
42}}) 2025-07-12 19:45:22.801538 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:22.801575 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:28.928222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 19:45:28.928377 | orchestrator | 2025-07-12 19:45:28.928396 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-07-12 19:45:28.928410 | orchestrator | Saturday 12 July 2025 19:45:22 +0000 (0:00:05.167) 0:00:40.326 ********* 2025-07-12 19:45:28.928423 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:45:28.928435 | orchestrator | 2025-07-12 19:45:28.928447 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-12 19:45:28.928458 | orchestrator | Saturday 12 July 2025 19:45:24 +0000 (0:00:01.255) 0:00:41.581 ********* 2025-07-12 19:45:28.928469 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:28.928502 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:45:28.928514 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:45:28.928525 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:45:28.928535 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:45:28.928546 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:45:28.928557 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:45:28.928568 | orchestrator | 2025-07-12 19:45:28.928579 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-12 19:45:28.928594 | orchestrator | Saturday 12 July 2025 19:45:25 +0000 (0:00:01.176) 0:00:42.758 ********* 2025-07-12 19:45:28.928642 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 19:45:28.928662 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 19:45:28.928681 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 19:45:28.928708 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 19:45:28.928726 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 19:45:28.928744 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 
19:45:28.928790 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 19:45:28.928809 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:28.928829 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 19:45:28.928848 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 19:45:28.928867 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 19:45:28.928885 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 19:45:28.928905 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 19:45:28.928925 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:45:28.928939 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 19:45:28.928950 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 19:45:28.928961 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 19:45:28.928972 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 19:45:28.928982 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:45:28.928993 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 19:45:28.929004 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 19:45:28.929015 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 19:45:28.929025 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 19:45:28.929036 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:45:28.929047 | 
orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 19:45:28.929058 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 19:45:28.929068 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 19:45:28.929079 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 19:45:28.929090 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:45:28.929101 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:45:28.929111 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 19:45:28.929122 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 19:45:28.929133 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 19:45:28.929143 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 19:45:28.929154 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:45:28.929165 | orchestrator | 2025-07-12 19:45:28.929176 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-07-12 19:45:28.929207 | orchestrator | Saturday 12 July 2025 19:45:27 +0000 (0:00:02.035) 0:00:44.793 ********* 2025-07-12 19:45:28.929219 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:28.929242 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:45:28.929253 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:45:28.929264 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:45:28.929275 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:45:28.929285 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:45:28.929296 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:45:28.929307 | orchestrator | 2025-07-12 
19:45:28.929318 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-07-12 19:45:28.929329 | orchestrator | Saturday 12 July 2025 19:45:27 +0000 (0:00:00.622) 0:00:45.416 ********* 2025-07-12 19:45:28.929340 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:28.929351 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:45:28.929362 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:45:28.929372 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:45:28.929383 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:45:28.929394 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:45:28.929405 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:45:28.929416 | orchestrator | 2025-07-12 19:45:28.929427 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:45:28.929439 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 19:45:28.929451 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:45:28.929462 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:45:28.929473 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:45:28.929492 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:45:28.929503 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:45:28.929514 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 19:45:28.929525 | orchestrator | 2025-07-12 19:45:28.929536 | orchestrator | 2025-07-12 19:45:28.929547 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 19:45:28.929557 | orchestrator | Saturday 12 July 2025 19:45:28 +0000 (0:00:00.688) 0:00:46.105 ********* 2025-07-12 19:45:28.929569 | orchestrator | =============================================================================== 2025-07-12 19:45:28.929579 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.73s 2025-07-12 19:45:28.929590 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.17s 2025-07-12 19:45:28.929601 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.45s 2025-07-12 19:45:28.929612 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.32s 2025-07-12 19:45:28.929622 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.09s 2025-07-12 19:45:28.929633 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.04s 2025-07-12 19:45:28.929644 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.96s 2025-07-12 19:45:28.929655 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.94s 2025-07-12 19:45:28.929665 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.92s 2025-07-12 19:45:28.929676 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.67s 2025-07-12 19:45:28.929694 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.41s 2025-07-12 19:45:28.929705 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.26s 2025-07-12 19:45:28.929716 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.24s 2025-07-12 19:45:28.929726 | orchestrator | osism.commons.network : Include 
type specific tasks --------------------- 1.22s 2025-07-12 19:45:28.929737 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.21s 2025-07-12 19:45:28.929748 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.18s 2025-07-12 19:45:28.929832 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.05s 2025-07-12 19:45:28.929845 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s 2025-07-12 19:45:28.929856 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s 2025-07-12 19:45:28.929867 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.87s 2025-07-12 19:45:29.198965 | orchestrator | + osism apply wireguard 2025-07-12 19:45:41.143397 | orchestrator | 2025-07-12 19:45:41 | INFO  | Task 069d109a-5549-4fc0-a30b-b22f9f713589 (wireguard) was prepared for execution. 2025-07-12 19:45:41.143473 | orchestrator | 2025-07-12 19:45:41 | INFO  | It takes a moment until task 069d109a-5549-4fc0-a30b-b22f9f713589 (wireguard) has been started and output is visible here. 
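
The netdev/network file pairs written by the `Create systemd networkd netdev files` and `Create systemd networkd network files` tasks above are not printed in the log. A minimal sketch of what such a pair could look like for the `testbed-manager` vxlan0 item (`vni: 42`, `local_ip: 192.168.16.5`, address `192.168.112.5/20`, `mtu: 1350`), assuming standard systemd-networkd VXLAN options rather than the role's actual template:

```ini
# /etc/systemd/network/30-vxlan0.netdev  (hypothetical rendering, not the role's template)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
```

```ini
# /etc/systemd/network/30-vxlan0.network  (hypothetical rendering, not the role's template)
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

The multiple `dests` per item suggest the role also populates unicast flood entries for each remote VTEP (in systemd-networkd terms, `[BridgeFDB]` sections with a `Destination=` per peer); that detail is omitted from this sketch.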
2025-07-12 19:46:00.392168 | orchestrator | 2025-07-12 19:46:00.392273 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-07-12 19:46:00.392290 | orchestrator | 2025-07-12 19:46:00.392302 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-07-12 19:46:00.392314 | orchestrator | Saturday 12 July 2025 19:45:45 +0000 (0:00:00.228) 0:00:00.228 ********* 2025-07-12 19:46:00.392326 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:00.392337 | orchestrator | 2025-07-12 19:46:00.392349 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-07-12 19:46:00.392360 | orchestrator | Saturday 12 July 2025 19:45:46 +0000 (0:00:01.511) 0:00:01.739 ********* 2025-07-12 19:46:00.392371 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:00.392383 | orchestrator | 2025-07-12 19:46:00.392394 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-07-12 19:46:00.392406 | orchestrator | Saturday 12 July 2025 19:45:52 +0000 (0:00:06.363) 0:00:08.103 ********* 2025-07-12 19:46:00.392416 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:00.392427 | orchestrator | 2025-07-12 19:46:00.392438 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-07-12 19:46:00.392450 | orchestrator | Saturday 12 July 2025 19:45:53 +0000 (0:00:00.555) 0:00:08.658 ********* 2025-07-12 19:46:00.392460 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:00.392471 | orchestrator | 2025-07-12 19:46:00.392483 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-07-12 19:46:00.392494 | orchestrator | Saturday 12 July 2025 19:45:53 +0000 (0:00:00.403) 0:00:09.062 ********* 2025-07-12 19:46:00.392505 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:00.392516 | orchestrator | 2025-07-12 
19:46:00.392527 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-07-12 19:46:00.392538 | orchestrator | Saturday 12 July 2025 19:45:54 +0000 (0:00:00.516) 0:00:09.579 ********* 2025-07-12 19:46:00.392549 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:00.392560 | orchestrator | 2025-07-12 19:46:00.392571 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-07-12 19:46:00.392582 | orchestrator | Saturday 12 July 2025 19:45:54 +0000 (0:00:00.531) 0:00:10.110 ********* 2025-07-12 19:46:00.392593 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:00.392604 | orchestrator | 2025-07-12 19:46:00.392633 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-07-12 19:46:00.392646 | orchestrator | Saturday 12 July 2025 19:45:55 +0000 (0:00:00.416) 0:00:10.527 ********* 2025-07-12 19:46:00.392680 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:00.392692 | orchestrator | 2025-07-12 19:46:00.392703 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-07-12 19:46:00.392714 | orchestrator | Saturday 12 July 2025 19:45:56 +0000 (0:00:01.139) 0:00:11.667 ********* 2025-07-12 19:46:00.392725 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 19:46:00.392736 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:00.392747 | orchestrator | 2025-07-12 19:46:00.392795 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-07-12 19:46:00.392807 | orchestrator | Saturday 12 July 2025 19:45:57 +0000 (0:00:00.939) 0:00:12.606 ********* 2025-07-12 19:46:00.392818 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:00.392829 | orchestrator | 2025-07-12 19:46:00.392839 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-07-12 
19:46:00.392851 | orchestrator | Saturday 12 July 2025 19:45:59 +0000 (0:00:01.690) 0:00:14.297 ********* 2025-07-12 19:46:00.392861 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:00.392872 | orchestrator | 2025-07-12 19:46:00.392884 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:46:00.392895 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:46:00.392907 | orchestrator | 2025-07-12 19:46:00.392918 | orchestrator | 2025-07-12 19:46:00.392929 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:46:00.392940 | orchestrator | Saturday 12 July 2025 19:46:00 +0000 (0:00:00.871) 0:00:15.169 ********* 2025-07-12 19:46:00.392952 | orchestrator | =============================================================================== 2025-07-12 19:46:00.392963 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.36s 2025-07-12 19:46:00.392974 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-07-12 19:46:00.392985 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.51s 2025-07-12 19:46:00.392996 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s 2025-07-12 19:46:00.393007 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s 2025-07-12 19:46:00.393018 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.87s 2025-07-12 19:46:00.393029 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2025-07-12 19:46:00.393040 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-07-12 19:46:00.393051 | orchestrator | osism.services.wireguard : Get 
preshared key ---------------------------- 0.52s 2025-07-12 19:46:00.393062 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-07-12 19:46:00.393074 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s 2025-07-12 19:46:00.648429 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-07-12 19:46:00.764592 | orchestrator | [curl progress meter omitted: 100%, 15 bytes received] 2025-07-12 19:46:00.777827 | orchestrator | + osism apply --environment custom workarounds 2025-07-12 19:46:02.558089 | orchestrator | 2025-07-12 19:46:02 | INFO  | Trying to run play workarounds in environment custom 2025-07-12 19:46:12.744594 | orchestrator | 2025-07-12 19:46:12 | INFO  | Task afd6927e-5983-43d4-81d8-aabb476d4944 (workarounds) was prepared for execution. 2025-07-12 19:46:12.744700 | orchestrator | 2025-07-12 19:46:12 | INFO  | It takes a moment until task afd6927e-5983-43d4-81d8-aabb476d4944 (workarounds) has been started and output is visible here. 
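
The `wg0.conf` that the `Copy wg0.conf configuration file` task deploys is not shown in the log. A minimal sketch of a typical `wg-quick` server configuration matching the key material the play generates (server keypair plus preshared key); the keys are placeholders and the tunnel subnet is an assumption, not taken from the actual template:

```ini
# /etc/wireguard/wg0.conf  (hypothetical sketch; keys are placeholders)
[Interface]
Address = 192.168.48.1/24          # assumed tunnel subnet, not from the log
ListenPort = 51820
PrivateKey = <server-private-key>  # from "Create public and private key - server"

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>     # from "Create preshared key"
AllowedIPs = 192.168.48.2/32
```

The matching client configurations produced by `Copy client configuration files` would mirror this, with the server's public key in their `[Peer]` section.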
2025-07-12 19:46:37.642185 | orchestrator | 2025-07-12 19:46:37.642299 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 19:46:37.642317 | orchestrator | 2025-07-12 19:46:37.642329 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-07-12 19:46:37.642341 | orchestrator | Saturday 12 July 2025 19:46:16 +0000 (0:00:00.142) 0:00:00.142 ********* 2025-07-12 19:46:37.642352 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-07-12 19:46:37.642364 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-07-12 19:46:37.642375 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-07-12 19:46:37.642385 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-07-12 19:46:37.642397 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-07-12 19:46:37.642407 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-07-12 19:46:37.642418 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-07-12 19:46:37.642429 | orchestrator | 2025-07-12 19:46:37.642440 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-07-12 19:46:37.642451 | orchestrator | 2025-07-12 19:46:37.642462 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-12 19:46:37.642490 | orchestrator | Saturday 12 July 2025 19:46:17 +0000 (0:00:00.735) 0:00:00.877 ********* 2025-07-12 19:46:37.642503 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:37.642515 | orchestrator | 2025-07-12 19:46:37.642526 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-07-12 19:46:37.642536 | orchestrator | 2025-07-12 19:46:37.642547 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-07-12 19:46:37.642558 | orchestrator | Saturday 12 July 2025 19:46:19 +0000 (0:00:02.254) 0:00:03.131 ********* 2025-07-12 19:46:37.642569 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:46:37.642580 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:46:37.642591 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:46:37.642601 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:46:37.642612 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:46:37.642622 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:46:37.642633 | orchestrator | 2025-07-12 19:46:37.642644 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-07-12 19:46:37.642655 | orchestrator | 2025-07-12 19:46:37.642666 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-07-12 19:46:37.642678 | orchestrator | Saturday 12 July 2025 19:46:21 +0000 (0:00:01.917) 0:00:05.048 ********* 2025-07-12 19:46:37.642691 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 19:46:37.642704 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 19:46:37.642717 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 19:46:37.642729 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 19:46:37.642742 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 19:46:37.642782 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 19:46:37.642803 | orchestrator | 2025-07-12 19:46:37.642826 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-07-12 19:46:37.642847 | orchestrator | Saturday 12 July 2025 19:46:22 +0000 (0:00:01.494) 0:00:06.543 ********* 2025-07-12 19:46:37.642865 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:46:37.642884 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:46:37.642902 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:46:37.642952 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:46:37.642972 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:46:37.642991 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:46:37.643009 | orchestrator | 2025-07-12 19:46:37.643027 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-07-12 19:46:37.643045 | orchestrator | Saturday 12 July 2025 19:46:26 +0000 (0:00:03.788) 0:00:10.332 ********* 2025-07-12 19:46:37.643063 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:46:37.643082 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:46:37.643099 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:46:37.643119 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:46:37.643130 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:46:37.643141 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:46:37.643152 | orchestrator | 2025-07-12 19:46:37.643163 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-07-12 19:46:37.643174 | orchestrator | 2025-07-12 19:46:37.643185 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-07-12 19:46:37.643196 | orchestrator | Saturday 12 July 2025 19:46:27 +0000 (0:00:00.717) 0:00:11.049 ********* 2025-07-12 19:46:37.643206 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:37.643217 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:46:37.643228 | orchestrator | changed: [testbed-node-4] 2025-07-12 
19:46:37.643239 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:46:37.643249 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:46:37.643260 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:46:37.643271 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:46:37.643282 | orchestrator | 2025-07-12 19:46:37.643293 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-07-12 19:46:37.643304 | orchestrator | Saturday 12 July 2025 19:46:29 +0000 (0:00:01.648) 0:00:12.698 ********* 2025-07-12 19:46:37.643315 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:37.643325 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:46:37.643336 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:46:37.643347 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:46:37.643358 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:46:37.643369 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:46:37.643399 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:46:37.643411 | orchestrator | 2025-07-12 19:46:37.643422 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-07-12 19:46:37.643433 | orchestrator | Saturday 12 July 2025 19:46:30 +0000 (0:00:01.794) 0:00:14.492 ********* 2025-07-12 19:46:37.643444 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:46:37.643454 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:46:37.643465 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:46:37.643476 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:37.643487 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:46:37.643498 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:46:37.643508 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:46:37.643519 | orchestrator | 2025-07-12 19:46:37.643530 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-07-12 19:46:37.643541 | orchestrator 
| Saturday 12 July 2025 19:46:32 +0000 (0:00:01.502) 0:00:15.995 ********* 2025-07-12 19:46:37.643552 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:37.643563 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:46:37.643574 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:46:37.643585 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:46:37.643596 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:46:37.643606 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:46:37.643617 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:46:37.643628 | orchestrator | 2025-07-12 19:46:37.643638 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-07-12 19:46:37.643658 | orchestrator | Saturday 12 July 2025 19:46:34 +0000 (0:00:01.779) 0:00:17.775 ********* 2025-07-12 19:46:37.643669 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:46:37.643690 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:46:37.643701 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:46:37.643712 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:46:37.643722 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:46:37.643733 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:46:37.643743 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:46:37.643795 | orchestrator | 2025-07-12 19:46:37.643886 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-07-12 19:46:37.643898 | orchestrator | 2025-07-12 19:46:37.643909 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-07-12 19:46:37.643920 | orchestrator | Saturday 12 July 2025 19:46:34 +0000 (0:00:00.624) 0:00:18.399 ********* 2025-07-12 19:46:37.643931 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:46:37.643941 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:46:37.643952 | orchestrator | ok: 
[testbed-manager] 2025-07-12 19:46:37.643963 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:46:37.643974 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:46:37.643984 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:46:37.643995 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:46:37.644006 | orchestrator | 2025-07-12 19:46:37.644016 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:46:37.644029 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:46:37.644048 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:37.644068 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:37.644087 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:37.644106 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:37.644125 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:37.644144 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:37.644164 | orchestrator | 2025-07-12 19:46:37.644182 | orchestrator | 2025-07-12 19:46:37.644203 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:46:37.644222 | orchestrator | Saturday 12 July 2025 19:46:37 +0000 (0:00:02.767) 0:00:21.166 ********* 2025-07-12 19:46:37.644241 | orchestrator | =============================================================================== 2025-07-12 19:46:37.644256 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.79s 2025-07-12 19:46:37.644267 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.77s 2025-07-12 19:46:37.644278 | orchestrator | Apply netplan configuration --------------------------------------------- 2.25s 2025-07-12 19:46:37.644288 | orchestrator | Apply netplan configuration --------------------------------------------- 1.92s 2025-07-12 19:46:37.644299 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.79s 2025-07-12 19:46:37.644310 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.78s 2025-07-12 19:46:37.644321 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s 2025-07-12 19:46:37.644331 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.50s 2025-07-12 19:46:37.644342 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s 2025-07-12 19:46:37.644365 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.74s 2025-07-12 19:46:37.644376 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.72s 2025-07-12 19:46:37.644399 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.62s 2025-07-12 19:46:38.290576 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-07-12 19:46:50.191118 | orchestrator | 2025-07-12 19:46:50 | INFO  | Task f53a6c2b-6ef5-4cd0-a731-3171a2ee258e (reboot) was prepared for execution. 2025-07-12 19:46:50.191228 | orchestrator | 2025-07-12 19:46:50 | INFO  | It takes a moment until task f53a6c2b-6ef5-4cd0-a731-3171a2ee258e (reboot) has been started and output is visible here. 
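The reboot task invoked above is gated by `-e ireallymeanit=yes`: the play's first task ("Exit playbook, if user did not mean to reboot systems") aborts unless the caller passed the confirmation variable. That guard lives in the Ansible playbook itself; as a rough shell analogy (function name and message are illustrative, not from the job), the pattern is:

```shell
#!/usr/bin/env bash
# Sketch of a confirmation guard analogous to the playbook's
# "Exit playbook, if user did not mean to reboot systems" task.
confirm_or_exit() {
    # Refuse to proceed unless the caller explicitly confirmed.
    if [[ "$1" != "yes" ]]; then
        echo "abort: confirm with -e ireallymeanit=yes" >&2
        return 1
    fi
}

# Usage (mirrors the job's invocation):
#   confirm_or_exit "$ireallymeanit" && osism apply reboot -l testbed-nodes -e ireallymeanit=yes
```

In the log the guard task shows as `skipping` on every node because the confirmation was supplied, so the play falls through to the actual reboot task.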
2025-07-12 19:46:59.307157 | orchestrator | 2025-07-12 19:46:59.307253 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 19:46:59.307269 | orchestrator | 2025-07-12 19:46:59.307282 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 19:46:59.307294 | orchestrator | Saturday 12 July 2025 19:46:53 +0000 (0:00:00.162) 0:00:00.162 ********* 2025-07-12 19:46:59.307305 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:46:59.307316 | orchestrator | 2025-07-12 19:46:59.307328 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 19:46:59.307339 | orchestrator | Saturday 12 July 2025 19:46:53 +0000 (0:00:00.075) 0:00:00.238 ********* 2025-07-12 19:46:59.307350 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:46:59.307361 | orchestrator | 2025-07-12 19:46:59.307385 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 19:46:59.307397 | orchestrator | Saturday 12 July 2025 19:46:54 +0000 (0:00:00.885) 0:00:01.123 ********* 2025-07-12 19:46:59.307407 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:46:59.307419 | orchestrator | 2025-07-12 19:46:59.307430 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 19:46:59.307441 | orchestrator | 2025-07-12 19:46:59.307452 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 19:46:59.307463 | orchestrator | Saturday 12 July 2025 19:46:54 +0000 (0:00:00.088) 0:00:01.212 ********* 2025-07-12 19:46:59.307474 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:46:59.307485 | orchestrator | 2025-07-12 19:46:59.307496 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 19:46:59.307507 | orchestrator | Saturday 12 July 2025 
19:46:55 +0000 (0:00:00.088) 0:00:01.300 ********* 2025-07-12 19:46:59.307518 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:46:59.307529 | orchestrator | 2025-07-12 19:46:59.307540 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 19:46:59.307551 | orchestrator | Saturday 12 July 2025 19:46:55 +0000 (0:00:00.617) 0:00:01.917 ********* 2025-07-12 19:46:59.307562 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:46:59.307573 | orchestrator | 2025-07-12 19:46:59.307584 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 19:46:59.307595 | orchestrator | 2025-07-12 19:46:59.307606 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 19:46:59.307617 | orchestrator | Saturday 12 July 2025 19:46:55 +0000 (0:00:00.098) 0:00:02.016 ********* 2025-07-12 19:46:59.307628 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:46:59.307639 | orchestrator | 2025-07-12 19:46:59.307650 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 19:46:59.307661 | orchestrator | Saturday 12 July 2025 19:46:55 +0000 (0:00:00.141) 0:00:02.158 ********* 2025-07-12 19:46:59.307672 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:46:59.307683 | orchestrator | 2025-07-12 19:46:59.307694 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 19:46:59.307705 | orchestrator | Saturday 12 July 2025 19:46:56 +0000 (0:00:00.651) 0:00:02.809 ********* 2025-07-12 19:46:59.307718 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:46:59.307730 | orchestrator | 2025-07-12 19:46:59.307790 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 19:46:59.307804 | orchestrator | 2025-07-12 19:46:59.307816 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-07-12 19:46:59.307829 | orchestrator | Saturday 12 July 2025 19:46:56 +0000 (0:00:00.098) 0:00:02.907 ********* 2025-07-12 19:46:59.307841 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:46:59.307853 | orchestrator | 2025-07-12 19:46:59.307865 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 19:46:59.307877 | orchestrator | Saturday 12 July 2025 19:46:56 +0000 (0:00:00.094) 0:00:03.001 ********* 2025-07-12 19:46:59.307889 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:46:59.307900 | orchestrator | 2025-07-12 19:46:59.307911 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 19:46:59.307922 | orchestrator | Saturday 12 July 2025 19:46:57 +0000 (0:00:00.637) 0:00:03.639 ********* 2025-07-12 19:46:59.307933 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:46:59.307944 | orchestrator | 2025-07-12 19:46:59.307955 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 19:46:59.307966 | orchestrator | 2025-07-12 19:46:59.307977 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 19:46:59.307988 | orchestrator | Saturday 12 July 2025 19:46:57 +0000 (0:00:00.108) 0:00:03.748 ********* 2025-07-12 19:46:59.307999 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:46:59.308010 | orchestrator | 2025-07-12 19:46:59.308021 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 19:46:59.308032 | orchestrator | Saturday 12 July 2025 19:46:57 +0000 (0:00:00.089) 0:00:03.837 ********* 2025-07-12 19:46:59.308043 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:46:59.308053 | orchestrator | 2025-07-12 19:46:59.308064 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-07-12 19:46:59.308075 | orchestrator | Saturday 12 July 2025 19:46:58 +0000 (0:00:00.653) 0:00:04.491 ********* 2025-07-12 19:46:59.308086 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:46:59.308097 | orchestrator | 2025-07-12 19:46:59.308108 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 19:46:59.308119 | orchestrator | 2025-07-12 19:46:59.308129 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 19:46:59.308140 | orchestrator | Saturday 12 July 2025 19:46:58 +0000 (0:00:00.100) 0:00:04.592 ********* 2025-07-12 19:46:59.308151 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:46:59.308161 | orchestrator | 2025-07-12 19:46:59.308172 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 19:46:59.308183 | orchestrator | Saturday 12 July 2025 19:46:58 +0000 (0:00:00.094) 0:00:04.686 ********* 2025-07-12 19:46:59.308194 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:46:59.308204 | orchestrator | 2025-07-12 19:46:59.308215 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 19:46:59.308226 | orchestrator | Saturday 12 July 2025 19:46:59 +0000 (0:00:00.644) 0:00:05.331 ********* 2025-07-12 19:46:59.308252 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:46:59.308264 | orchestrator | 2025-07-12 19:46:59.308275 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:46:59.308286 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:59.308298 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:59.308309 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-07-12 19:46:59.308320 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:59.308340 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:59.308351 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:46:59.308362 | orchestrator | 2025-07-12 19:46:59.308373 | orchestrator | 2025-07-12 19:46:59.308383 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:46:59.308394 | orchestrator | Saturday 12 July 2025 19:46:59 +0000 (0:00:00.033) 0:00:05.365 ********* 2025-07-12 19:46:59.308405 | orchestrator | =============================================================================== 2025-07-12 19:46:59.308416 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.09s 2025-07-12 19:46:59.308427 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.58s 2025-07-12 19:46:59.308438 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.53s 2025-07-12 19:46:59.508513 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-07-12 19:47:11.233353 | orchestrator | 2025-07-12 19:47:11 | INFO  | Task 091d35bd-b51d-4473-a028-cbf396dddaeb (wait-for-connection) was prepared for execution. 2025-07-12 19:47:11.233461 | orchestrator | 2025-07-12 19:47:11 | INFO  | It takes a moment until task 091d35bd-b51d-4473-a028-cbf396dddaeb (wait-for-connection) has been started and output is visible here. 
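The sequence here is a deliberate two-step pattern: the reboot play fires the reboot without waiting ("do not wait for the reboot to complete"), and a separate `wait-for-connection` play (Ansible's `wait_for_connection` module) then blocks until each node answers again. A minimal shell sketch of the same poll-until-reachable idea, assuming illustrative names and timeouts (the real job does this inside Ansible, not in shell):

```shell
#!/usr/bin/env bash
# Sketch: poll a host with a no-op SSH command until it responds
# or a timeout elapses. Host, interval, and timeout are illustrative.
wait_for_ssh() {
    local host=$1 timeout=${2:-300} waited=0
    # BatchMode avoids hanging on password prompts; retry every 5s.
    until ssh -o ConnectTimeout=5 -o BatchMode=yes "$host" true 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            return 1
        fi
        sleep 5
    done
}

# Usage: wait_for_ssh testbed-node-0 300
```

The 11.65s recorded for "Wait until remote system is reachable" below is the slowest node's round trip through this kind of loop.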
2025-07-12 19:47:27.159577 | orchestrator | 2025-07-12 19:47:27.159727 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-07-12 19:47:27.159795 | orchestrator | 2025-07-12 19:47:27.159824 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-07-12 19:47:27.159855 | orchestrator | Saturday 12 July 2025 19:47:15 +0000 (0:00:00.235) 0:00:00.235 ********* 2025-07-12 19:47:27.159879 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:47:27.159904 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:47:27.159923 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:47:27.159943 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:47:27.159962 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:47:27.159981 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:47:27.160000 | orchestrator | 2025-07-12 19:47:27.160018 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:47:27.160038 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:47:27.160058 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:47:27.160079 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:47:27.160099 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:47:27.160118 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:47:27.160138 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:47:27.160158 | orchestrator | 2025-07-12 19:47:27.160179 | orchestrator | 2025-07-12 19:47:27.160199 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 19:47:27.160219 | orchestrator | Saturday 12 July 2025 19:47:26 +0000 (0:00:11.653) 0:00:11.889 ********* 2025-07-12 19:47:27.160239 | orchestrator | =============================================================================== 2025-07-12 19:47:27.160261 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.65s 2025-07-12 19:47:27.429227 | orchestrator | + osism apply hddtemp 2025-07-12 19:47:39.203955 | orchestrator | 2025-07-12 19:47:39 | INFO  | Task 118e881c-95c4-4fcc-8f1c-dd54e9fd7773 (hddtemp) was prepared for execution. 2025-07-12 19:47:39.204068 | orchestrator | 2025-07-12 19:47:39 | INFO  | It takes a moment until task 118e881c-95c4-4fcc-8f1c-dd54e9fd7773 (hddtemp) has been started and output is visible here. 2025-07-12 19:48:06.428953 | orchestrator | 2025-07-12 19:48:06.429077 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-07-12 19:48:06.429093 | orchestrator | 2025-07-12 19:48:06.429106 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-07-12 19:48:06.429117 | orchestrator | Saturday 12 July 2025 19:47:43 +0000 (0:00:00.264) 0:00:00.264 ********* 2025-07-12 19:48:06.429128 | orchestrator | ok: [testbed-manager] 2025-07-12 19:48:06.429141 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:48:06.429152 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:48:06.429163 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:48:06.429174 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:48:06.429185 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:48:06.429196 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:48:06.429207 | orchestrator | 2025-07-12 19:48:06.429219 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-07-12 19:48:06.429230 | orchestrator | Saturday 12 July 2025 
19:47:43 +0000 (0:00:00.717) 0:00:00.982 ********* 2025-07-12 19:48:06.429259 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:48:06.429274 | orchestrator | 2025-07-12 19:48:06.429285 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-07-12 19:48:06.429297 | orchestrator | Saturday 12 July 2025 19:47:45 +0000 (0:00:01.230) 0:00:02.212 ********* 2025-07-12 19:48:06.429307 | orchestrator | ok: [testbed-manager] 2025-07-12 19:48:06.429318 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:48:06.429329 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:48:06.429340 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:48:06.429351 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:48:06.429362 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:48:06.429373 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:48:06.429384 | orchestrator | 2025-07-12 19:48:06.429395 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-07-12 19:48:06.429406 | orchestrator | Saturday 12 July 2025 19:47:47 +0000 (0:00:02.019) 0:00:04.232 ********* 2025-07-12 19:48:06.429417 | orchestrator | changed: [testbed-manager] 2025-07-12 19:48:06.429429 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:48:06.429440 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:48:06.429451 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:48:06.429462 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:48:06.429473 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:48:06.429485 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:48:06.429499 | orchestrator | 2025-07-12 19:48:06.429511 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-07-12 19:48:06.429524 | orchestrator | Saturday 12 July 2025 19:47:48 +0000 (0:00:01.165) 0:00:05.398 ********* 2025-07-12 19:48:06.429536 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:48:06.429548 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:48:06.429563 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:48:06.429575 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:48:06.429586 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:48:06.429597 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:48:06.429608 | orchestrator | ok: [testbed-manager] 2025-07-12 19:48:06.429619 | orchestrator | 2025-07-12 19:48:06.429630 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-07-12 19:48:06.429641 | orchestrator | Saturday 12 July 2025 19:47:49 +0000 (0:00:01.118) 0:00:06.517 ********* 2025-07-12 19:48:06.429652 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:48:06.429687 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:48:06.429699 | orchestrator | changed: [testbed-manager] 2025-07-12 19:48:06.429709 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:48:06.429721 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:48:06.429731 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:48:06.429742 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:48:06.429800 | orchestrator | 2025-07-12 19:48:06.429812 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-07-12 19:48:06.429823 | orchestrator | Saturday 12 July 2025 19:47:50 +0000 (0:00:00.818) 0:00:07.335 ********* 2025-07-12 19:48:06.429833 | orchestrator | changed: [testbed-manager] 2025-07-12 19:48:06.429844 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:48:06.429855 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:48:06.429866 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:48:06.429877 | orchestrator | changed: 
[testbed-node-1] 2025-07-12 19:48:06.429888 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:48:06.429898 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:48:06.429909 | orchestrator | 2025-07-12 19:48:06.429920 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-07-12 19:48:06.429931 | orchestrator | Saturday 12 July 2025 19:48:02 +0000 (0:00:12.568) 0:00:19.904 ********* 2025-07-12 19:48:06.429942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:48:06.429954 | orchestrator | 2025-07-12 19:48:06.429965 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-07-12 19:48:06.429976 | orchestrator | Saturday 12 July 2025 19:48:04 +0000 (0:00:01.340) 0:00:21.245 ********* 2025-07-12 19:48:06.429987 | orchestrator | changed: [testbed-manager] 2025-07-12 19:48:06.429998 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:48:06.430008 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:48:06.430081 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:48:06.430093 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:48:06.430103 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:48:06.430114 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:48:06.430125 | orchestrator | 2025-07-12 19:48:06.430136 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:48:06.430147 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:48:06.430178 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:48:06.430191 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:48:06.430202 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:48:06.430213 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:48:06.430231 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:48:06.430242 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:48:06.430253 | orchestrator | 2025-07-12 19:48:06.430264 | orchestrator | 2025-07-12 19:48:06.430277 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:48:06.430296 | orchestrator | Saturday 12 July 2025 19:48:06 +0000 (0:00:01.899) 0:00:23.145 ********* 2025-07-12 19:48:06.430329 | orchestrator | =============================================================================== 2025-07-12 19:48:06.430344 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.57s 2025-07-12 19:48:06.430356 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.02s 2025-07-12 19:48:06.430366 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.90s 2025-07-12 19:48:06.430377 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.34s 2025-07-12 19:48:06.430388 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.23s 2025-07-12 19:48:06.430399 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.17s 2025-07-12 19:48:06.430409 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.12s 2025-07-12 19:48:06.430420 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.82s 2025-07-12 19:48:06.430431 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.72s 2025-07-12 19:48:06.698113 | orchestrator | ++ semver latest 7.1.1 2025-07-12 19:48:06.747035 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-12 19:48:06.747137 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-12 19:48:06.747153 | orchestrator | + sudo systemctl restart manager.service 2025-07-12 19:48:20.207739 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-12 19:48:20.207900 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-07-12 19:48:20.207916 | orchestrator | + local max_attempts=60 2025-07-12 19:48:20.207930 | orchestrator | + local name=ceph-ansible 2025-07-12 19:48:20.207941 | orchestrator | + local attempt_num=1 2025-07-12 19:48:20.207953 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:48:20.243722 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:48:20.243831 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:48:20.243846 | orchestrator | + sleep 5 2025-07-12 19:48:25.252144 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:48:25.284079 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:48:25.284163 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:48:25.284174 | orchestrator | + sleep 5 2025-07-12 19:48:30.287623 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:48:30.325478 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:48:30.325579 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:48:30.325596 | orchestrator | + sleep 5 2025-07-12 19:48:35.330631 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:48:35.365416 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:48:35.365522 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:48:35.365537 | orchestrator | + sleep 5 2025-07-12 19:48:40.368708 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:48:40.410859 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:48:40.410958 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:48:40.410974 | orchestrator | + sleep 5 2025-07-12 19:48:45.415309 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:48:45.452357 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:48:45.452449 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:48:45.452463 | orchestrator | + sleep 5 2025-07-12 19:48:50.457343 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:48:50.492970 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:48:50.493029 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:48:50.493043 | orchestrator | + sleep 5 2025-07-12 19:48:55.496226 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:48:55.529121 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 19:48:55.529183 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:48:55.529197 | orchestrator | + sleep 5 2025-07-12 19:49:00.536911 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:49:00.566431 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 19:49:00.566512 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:49:00.566528 | orchestrator | + sleep 5 2025-07-12 19:49:05.569269 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:49:05.606966 | orchestrator | + [[ starting == 
\h\e\a\l\t\h\y ]] 2025-07-12 19:49:05.607046 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:49:05.607069 | orchestrator | + sleep 5 2025-07-12 19:49:10.611986 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:49:10.650365 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 19:49:10.650434 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:49:10.650443 | orchestrator | + sleep 5 2025-07-12 19:49:15.655809 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:49:15.695817 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 19:49:15.695913 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:49:15.695931 | orchestrator | + sleep 5 2025-07-12 19:49:20.700927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:49:20.736858 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 19:49:20.736920 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 19:49:20.736933 | orchestrator | + sleep 5 2025-07-12 19:49:25.741251 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:49:25.782013 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:49:25.782158 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-07-12 19:49:25.782174 | orchestrator | + local max_attempts=60 2025-07-12 19:49:25.782187 | orchestrator | + local name=kolla-ansible 2025-07-12 19:49:25.782199 | orchestrator | + local attempt_num=1 2025-07-12 19:49:25.782211 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-07-12 19:49:25.805051 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:49:25.805134 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-07-12 19:49:25.805147 | orchestrator | + local max_attempts=60 2025-07-12 
19:49:25.805159 | orchestrator | + local name=osism-ansible 2025-07-12 19:49:25.805171 | orchestrator | + local attempt_num=1 2025-07-12 19:49:25.805303 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-07-12 19:49:25.842910 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:49:25.842991 | orchestrator | + [[ true == \t\r\u\e ]] 2025-07-12 19:49:25.843029 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-07-12 19:49:25.999213 | orchestrator | ARA in ceph-ansible already disabled. 2025-07-12 19:49:26.151626 | orchestrator | ARA in kolla-ansible already disabled. 2025-07-12 19:49:26.322630 | orchestrator | ARA in osism-ansible already disabled. 2025-07-12 19:49:26.492891 | orchestrator | ARA in osism-kubernetes already disabled. 2025-07-12 19:49:26.493067 | orchestrator | + osism apply gather-facts 2025-07-12 19:49:38.451349 | orchestrator | 2025-07-12 19:49:38 | INFO  | Task a39faa71-a92a-4af9-941a-f92a516717f7 (gather-facts) was prepared for execution. 2025-07-12 19:49:38.451443 | orchestrator | 2025-07-12 19:49:38 | INFO  | It takes a moment until task a39faa71-a92a-4af9-941a-f92a516717f7 (gather-facts) has been started and output is visible here. 
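The `set -x` trace above polls `docker inspect -f '{{.State.Health.Status}}'` every 5 seconds until the container reports `healthy`, bailing out after `max_attempts` tries. The helper function itself is not printed in the log; a minimal POSIX-sh reconstruction of the same polling pattern, generalized to take the status command as trailing arguments (an assumption — the real `wait_for_container_healthy` is hard-wired to Docker), could look like:

```shell
# Sketch (assumption): poll a status command until it prints "healthy",
# giving up after max_attempts tries with a 5-second pause in between.
# The trace above corresponds roughly to:
#   wait_for_healthy 60 docker inspect -f '{{.State.Health.Status}}' ceph-ansible
wait_for_healthy() {
    max_attempts=$1
    shift                       # remaining args: the status command to run
    attempt_num=1
    until [ "$("$@")" = healthy ]; do
        # Give up once the attempt budget is exhausted.
        [ "$attempt_num" -eq "$max_attempts" ] && return 1
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

With a 60-attempt budget and a 5 s pause, this matches the roughly five minutes the `ceph-ansible` container takes in the log to move from `unhealthy` through `starting` to `healthy`.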
2025-07-12 19:49:51.774509 | orchestrator | 2025-07-12 19:49:51.774621 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 19:49:51.774638 | orchestrator | 2025-07-12 19:49:51.774650 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 19:49:51.774665 | orchestrator | Saturday 12 July 2025 19:49:42 +0000 (0:00:00.218) 0:00:00.218 ********* 2025-07-12 19:49:51.774683 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:49:51.774703 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:49:51.774735 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:49:51.774820 | orchestrator | ok: [testbed-manager] 2025-07-12 19:49:51.774837 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:49:51.774854 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:49:51.774871 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:49:51.774887 | orchestrator | 2025-07-12 19:49:51.774906 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-12 19:49:51.774924 | orchestrator | 2025-07-12 19:49:51.774943 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-12 19:49:51.774963 | orchestrator | Saturday 12 July 2025 19:49:50 +0000 (0:00:08.479) 0:00:08.697 ********* 2025-07-12 19:49:51.775027 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:49:51.775052 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:49:51.775099 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:49:51.775118 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:49:51.775137 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:49:51.775156 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:49:51.775175 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:49:51.775194 | orchestrator | 2025-07-12 19:49:51.775213 | orchestrator | PLAY RECAP 
********************************************************************* 2025-07-12 19:49:51.775232 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:49:51.775253 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:49:51.775270 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:49:51.775289 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:49:51.775308 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:49:51.775327 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:49:51.775346 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:49:51.775365 | orchestrator | 2025-07-12 19:49:51.775383 | orchestrator | 2025-07-12 19:49:51.775403 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:49:51.775422 | orchestrator | Saturday 12 July 2025 19:49:51 +0000 (0:00:00.521) 0:00:09.219 ********* 2025-07-12 19:49:51.775440 | orchestrator | =============================================================================== 2025-07-12 19:49:51.775457 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.48s 2025-07-12 19:49:51.775468 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-07-12 19:49:52.057250 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-07-12 19:49:52.074229 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-07-12 19:49:52.102226 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-07-12 19:49:52.114262 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-07-12 19:49:52.124613 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-07-12 19:49:52.136704 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-07-12 19:49:52.147000 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-07-12 19:49:52.158196 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-07-12 19:49:52.168988 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-07-12 19:49:52.179695 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-07-12 19:49:52.190854 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-07-12 19:49:52.202136 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-07-12 19:49:52.213321 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-07-12 19:49:52.222793 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-07-12 19:49:52.233558 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-07-12 19:49:52.243320 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-07-12 19:49:52.253226 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-07-12 19:49:52.263432 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-07-12 19:49:52.273643 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-07-12 19:49:52.282887 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-07-12 19:49:52.292007 | orchestrator | + [[ false == \t\r\u\e ]] 2025-07-12 19:49:52.517674 | orchestrator | ok: Runtime: 0:22:31.999968 2025-07-12 19:49:52.633980 | 2025-07-12 19:49:52.634150 | TASK [Deploy services] 2025-07-12 19:49:53.166387 | orchestrator | skipping: Conditional result was False 2025-07-12 19:49:53.182261 | 2025-07-12 19:49:53.182454 | TASK [Deploy in a nutshell] 2025-07-12 19:49:53.897216 | orchestrator | 2025-07-12 19:49:53.897405 | orchestrator | # PULL IMAGES 2025-07-12 19:49:53.897426 | orchestrator | 2025-07-12 19:49:53.897439 | orchestrator | + set -e 2025-07-12 19:49:53.897456 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 19:49:53.897474 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 19:49:53.897488 | orchestrator | ++ INTERACTIVE=false 2025-07-12 19:49:53.897532 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 19:49:53.897553 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 19:49:53.897565 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 19:49:53.897576 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 19:49:53.897592 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 19:49:53.897603 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 19:49:53.897619 | orchestrator | ++ 
CEPH_VERSION=reef 2025-07-12 19:49:53.897629 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 19:49:53.897645 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 19:49:53.897655 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-12 19:49:53.897669 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-12 19:49:53.897680 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-12 19:49:53.897694 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-12 19:49:53.897704 | orchestrator | ++ export ARA=false 2025-07-12 19:49:53.897714 | orchestrator | ++ ARA=false 2025-07-12 19:49:53.897724 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-12 19:49:53.897734 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-12 19:49:53.897799 | orchestrator | ++ export TEMPEST=false 2025-07-12 19:49:53.897811 | orchestrator | ++ TEMPEST=false 2025-07-12 19:49:53.897821 | orchestrator | ++ export IS_ZUUL=true 2025-07-12 19:49:53.897831 | orchestrator | ++ IS_ZUUL=true 2025-07-12 19:49:53.897841 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-07-12 19:49:53.897851 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169 2025-07-12 19:49:53.897861 | orchestrator | ++ export EXTERNAL_API=false 2025-07-12 19:49:53.897871 | orchestrator | ++ EXTERNAL_API=false 2025-07-12 19:49:53.897881 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-12 19:49:53.897891 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-12 19:49:53.897901 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-12 19:49:53.897931 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-12 19:49:53.897941 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-12 19:49:53.897950 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-12 19:49:53.897960 | orchestrator | + echo 2025-07-12 19:49:53.897977 | orchestrator | + echo '# PULL IMAGES' 2025-07-12 19:49:53.897987 | orchestrator | + echo 2025-07-12 19:49:53.898010 | orchestrator | ++ semver latest 7.0.0 2025-07-12 
19:49:53.951004 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-12 19:49:53.951078 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-12 19:49:53.951085 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2025-07-12 19:49:55.692782 | orchestrator | 2025-07-12 19:49:55 | INFO  | Trying to run play pull-images in environment custom 2025-07-12 19:50:05.834358 | orchestrator | 2025-07-12 19:50:05 | INFO  | Task 591bbc83-908a-4428-819e-6dfbabf6bba1 (pull-images) was prepared for execution. 2025-07-12 19:50:05.834496 | orchestrator | 2025-07-12 19:50:05 | INFO  | Task 591bbc83-908a-4428-819e-6dfbabf6bba1 is running in background. No more output. Check ARA for logs. 2025-07-12 19:50:07.999501 | orchestrator | 2025-07-12 19:50:07 | INFO  | Trying to run play wipe-partitions in environment custom 2025-07-12 19:50:18.255580 | orchestrator | 2025-07-12 19:50:18 | INFO  | Task a0adcb58-615d-4284-bd9a-b2896c71f778 (wipe-partitions) was prepared for execution. 2025-07-12 19:50:18.255672 | orchestrator | 2025-07-12 19:50:18 | INFO  | It takes a moment until task a0adcb58-615d-4284-bd9a-b2896c71f778 (wipe-partitions) has been started and output is visible here. 
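Above, `semver latest 7.0.0` prints `-1`, so the `[[ -1 -ge 0 ]]` guard fails and the script falls through to the explicit `latest == latest` check. The `semver` helper itself is not part of this log; a minimal comparison function in the same spirit, built on GNU coreutils' `sort -V` (hypothetical, and without the special handling of non-numeric tags such as `latest` that the real helper evidently applies), might be:

```shell
# Sketch: print -1, 0, or 1 depending on whether version $1 is lower than,
# equal to, or higher than version $2. Relies on GNU sort's -V version sort.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]; then
        echo -1             # $1 sorts first, i.e. it is the lower version
    else
        echo 1
    fi
}
```

This kind of guard lets the deploy scripts branch on `MANAGER_VERSION`: numeric releases are compared, while the `latest` tag is matched literally as seen in the trace.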
2025-07-12 19:50:29.955729 | orchestrator | 2025-07-12 19:50:29.955896 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-07-12 19:50:29.955913 | orchestrator | 2025-07-12 19:50:29.955926 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-07-12 19:50:29.955946 | orchestrator | Saturday 12 July 2025 19:50:22 +0000 (0:00:00.137) 0:00:00.137 ********* 2025-07-12 19:50:29.955958 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:50:29.955970 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:50:29.955982 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:50:29.955994 | orchestrator | 2025-07-12 19:50:29.956005 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-07-12 19:50:29.956041 | orchestrator | Saturday 12 July 2025 19:50:22 +0000 (0:00:00.548) 0:00:00.685 ********* 2025-07-12 19:50:29.956053 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:50:29.956064 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:50:29.956079 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:50:29.956091 | orchestrator | 2025-07-12 19:50:29.956102 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-07-12 19:50:29.956113 | orchestrator | Saturday 12 July 2025 19:50:22 +0000 (0:00:00.216) 0:00:00.901 ********* 2025-07-12 19:50:29.956124 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:50:29.956136 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:50:29.956147 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:50:29.956157 | orchestrator | 2025-07-12 19:50:29.956169 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-07-12 19:50:29.956179 | orchestrator | Saturday 12 July 2025 19:50:23 +0000 (0:00:00.628) 0:00:01.529 ********* 2025-07-12 19:50:29.956191 | orchestrator | skipping: 
[testbed-node-3] 2025-07-12 19:50:29.956201 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:50:29.956212 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:50:29.956223 | orchestrator | 2025-07-12 19:50:29.956234 | orchestrator | TASK [Check device availability] *********************************************** 2025-07-12 19:50:29.956245 | orchestrator | Saturday 12 July 2025 19:50:23 +0000 (0:00:00.222) 0:00:01.752 ********* 2025-07-12 19:50:29.956258 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-12 19:50:29.956275 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-12 19:50:29.956287 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-12 19:50:29.956300 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-12 19:50:29.956313 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-12 19:50:29.956325 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-12 19:50:29.956337 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-12 19:50:29.956349 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-12 19:50:29.956361 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-12 19:50:29.956373 | orchestrator | 2025-07-12 19:50:29.956385 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-07-12 19:50:29.956398 | orchestrator | Saturday 12 July 2025 19:50:24 +0000 (0:00:01.116) 0:00:02.869 ********* 2025-07-12 19:50:29.956410 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-07-12 19:50:29.956423 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-07-12 19:50:29.956435 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-07-12 19:50:29.956448 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-07-12 19:50:29.956460 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-07-12 19:50:29.956472 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-07-12 19:50:29.956485 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-07-12 19:50:29.956497 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-07-12 19:50:29.956509 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-07-12 19:50:29.956521 | orchestrator | 2025-07-12 19:50:29.956533 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-07-12 19:50:29.956546 | orchestrator | Saturday 12 July 2025 19:50:26 +0000 (0:00:01.272) 0:00:04.141 ********* 2025-07-12 19:50:29.956558 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-07-12 19:50:29.956643 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-07-12 19:50:29.956657 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-07-12 19:50:29.956669 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-07-12 19:50:29.956696 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-07-12 19:50:29.956718 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-07-12 19:50:29.956729 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-07-12 19:50:29.956769 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-07-12 19:50:29.956788 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-07-12 19:50:29.956800 | orchestrator | 2025-07-12 19:50:29.956811 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-07-12 19:50:29.956822 | orchestrator | Saturday 12 July 2025 19:50:28 +0000 (0:00:02.242) 0:00:06.383 ********* 2025-07-12 19:50:29.956833 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:50:29.956844 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:50:29.956855 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:50:29.956866 | orchestrator | 2025-07-12 19:50:29.956877 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-07-12 19:50:29.956888 | orchestrator | Saturday 12 July 2025 19:50:29 +0000 (0:00:00.580) 0:00:06.964 ********* 2025-07-12 19:50:29.956899 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:50:29.956910 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:50:29.956921 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:50:29.956932 | orchestrator | 2025-07-12 19:50:29.956944 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:50:29.956957 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:29.956970 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:29.956999 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:29.957011 | orchestrator | 2025-07-12 19:50:29.957022 | orchestrator | 2025-07-12 19:50:29.957034 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:50:29.957045 | orchestrator | Saturday 12 July 2025 19:50:29 +0000 (0:00:00.628) 0:00:07.593 ********* 2025-07-12 19:50:29.957056 | orchestrator | =============================================================================== 2025-07-12 19:50:29.957067 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.24s 2025-07-12 19:50:29.957078 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.27s 2025-07-12 19:50:29.957089 | orchestrator | Check device availability ----------------------------------------------- 1.12s 2025-07-12 19:50:29.957100 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-07-12 19:50:29.957111 | orchestrator | Find all logical devices with prefix ceph 
------------------------------- 0.63s 2025-07-12 19:50:29.957122 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2025-07-12 19:50:29.957133 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s 2025-07-12 19:50:29.957144 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.22s 2025-07-12 19:50:29.957155 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2025-07-12 19:50:42.044144 | orchestrator | 2025-07-12 19:50:42 | INFO  | Task 6be3d444-f756-4da3-a6a6-4ce91859de18 (facts) was prepared for execution. 2025-07-12 19:50:42.044240 | orchestrator | 2025-07-12 19:50:42 | INFO  | It takes a moment until task 6be3d444-f756-4da3-a6a6-4ce91859de18 (facts) has been started and output is visible here. 2025-07-12 19:50:53.307225 | orchestrator | 2025-07-12 19:50:53.307333 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-12 19:50:53.307350 | orchestrator | 2025-07-12 19:50:53.307362 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-12 19:50:53.307374 | orchestrator | Saturday 12 July 2025 19:50:45 +0000 (0:00:00.203) 0:00:00.203 ********* 2025-07-12 19:50:53.307385 | orchestrator | ok: [testbed-manager] 2025-07-12 19:50:53.307397 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:50:53.307408 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:50:53.307449 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:50:53.307461 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:50:53.307471 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:50:53.307491 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:50:53.307509 | orchestrator | 2025-07-12 19:50:53.307526 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-12 19:50:53.307548 | 
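The wipe-partitions play above prepares each OSD candidate disk (`/dev/sdb`..`/dev/sdd` on nodes 3-5) in stages: `wipefs` drops filesystem and RAID signatures, the first 32 MiB are overwritten with zeros so stale Ceph/LVM metadata cannot be rediscovered, and udev rules are reloaded and re-triggered. The play's tasks are Ansible modules, not a shell script; a hedged per-device sketch of just the zeroing step:

```shell
# Sketch (assumption): zero the first 32 MiB of a device, as the
# "Overwrite first 32M with zeros" task does. The play additionally runs
# wipefs --all on each device first, then reloads udev rules and requests
# device events (udevadm control --reload-rules; udevadm trigger).
zero_device_head() {
    dev=$1
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=fsync 2>/dev/null
}
```

Zeroing only the head of the disk is enough here because LVM, GPT, and Ceph OSD labels all live in the first few megabytes; the remaining blocks are left untouched.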
orchestrator | Saturday 12 July 2025 19:50:46 +0000 (0:00:00.977) 0:00:01.181 ********* 2025-07-12 19:50:53.307567 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:50:53.307587 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:50:53.307602 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:50:53.307613 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:50:53.307624 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:50:53.307635 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:50:53.307647 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:50:53.307665 | orchestrator | 2025-07-12 19:50:53.307683 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 19:50:53.307703 | orchestrator | 2025-07-12 19:50:53.307766 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-07-12 19:50:53.307785 | orchestrator | Saturday 12 July 2025 19:50:47 +0000 (0:00:01.043) 0:00:02.225 ********* 2025-07-12 19:50:53.307798 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:50:53.307810 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:50:53.307825 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:50:53.307837 | orchestrator | ok: [testbed-manager] 2025-07-12 19:50:53.307849 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:50:53.307861 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:50:53.307873 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:50:53.307885 | orchestrator | 2025-07-12 19:50:53.307896 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-12 19:50:53.307907 | orchestrator | 2025-07-12 19:50:53.307918 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-12 19:50:53.307929 | orchestrator | Saturday 12 July 2025 19:50:52 +0000 (0:00:04.721) 0:00:06.947 ********* 2025-07-12 19:50:53.307939 | orchestrator | 
skipping: [testbed-manager] 2025-07-12 19:50:53.307950 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:50:53.307961 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:50:53.307972 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:50:53.307983 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:50:53.307993 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:50:53.308004 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:50:53.308015 | orchestrator | 2025-07-12 19:50:53.308029 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:50:53.308048 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:53.308068 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:53.308087 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:53.308106 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:53.308118 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:53.308129 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:53.308140 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:50:53.308150 | orchestrator | 2025-07-12 19:50:53.308170 | orchestrator | 2025-07-12 19:50:53.308181 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:50:53.308192 | orchestrator | Saturday 12 July 2025 19:50:52 +0000 (0:00:00.498) 0:00:07.445 ********* 2025-07-12 19:50:53.308203 | orchestrator | =============================================================================== 
2025-07-12 19:50:53.308214 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s 2025-07-12 19:50:53.308225 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s 2025-07-12 19:50:53.308236 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.98s 2025-07-12 19:50:53.308247 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-07-12 19:50:55.752968 | orchestrator | 2025-07-12 19:50:55 | INFO  | Task b22a7d44-fa1d-4d77-83a7-967595173f44 (ceph-configure-lvm-volumes) was prepared for execution. 2025-07-12 19:50:55.753058 | orchestrator | 2025-07-12 19:50:55 | INFO  | It takes a moment until task b22a7d44-fa1d-4d77-83a7-967595173f44 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-07-12 19:51:06.536057 | orchestrator | 2025-07-12 19:51:06.536163 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-12 19:51:06.536178 | orchestrator | 2025-07-12 19:51:06.536188 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 19:51:06.536199 | orchestrator | Saturday 12 July 2025 19:50:59 +0000 (0:00:00.240) 0:00:00.240 ********* 2025-07-12 19:51:06.536209 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 19:51:06.536219 | orchestrator | 2025-07-12 19:51:06.536229 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 19:51:06.536239 | orchestrator | Saturday 12 July 2025 19:50:59 +0000 (0:00:00.211) 0:00:00.452 ********* 2025-07-12 19:51:06.536249 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:51:06.536259 | orchestrator | 2025-07-12 19:51:06.536269 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536279 | orchestrator | 
Saturday 12 July 2025 19:51:00 +0000 (0:00:00.217) 0:00:00.670 ********* 2025-07-12 19:51:06.536289 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-12 19:51:06.536299 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-12 19:51:06.536309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-12 19:51:06.536327 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-12 19:51:06.536338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-12 19:51:06.536347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-12 19:51:06.536357 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-12 19:51:06.536366 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-12 19:51:06.536376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-12 19:51:06.536386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-12 19:51:06.536395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-12 19:51:06.536405 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-12 19:51:06.536415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-12 19:51:06.536424 | orchestrator | 2025-07-12 19:51:06.536434 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536444 | orchestrator | Saturday 12 July 2025 19:51:00 +0000 (0:00:00.323) 0:00:00.993 ********* 2025-07-12 
19:51:06.536454 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.536489 | orchestrator | 2025-07-12 19:51:06.536500 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536510 | orchestrator | Saturday 12 July 2025 19:51:00 +0000 (0:00:00.356) 0:00:01.350 ********* 2025-07-12 19:51:06.536519 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.536529 | orchestrator | 2025-07-12 19:51:06.536538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536548 | orchestrator | Saturday 12 July 2025 19:51:00 +0000 (0:00:00.176) 0:00:01.526 ********* 2025-07-12 19:51:06.536557 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.536567 | orchestrator | 2025-07-12 19:51:06.536576 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536586 | orchestrator | Saturday 12 July 2025 19:51:01 +0000 (0:00:00.173) 0:00:01.699 ********* 2025-07-12 19:51:06.536598 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.536614 | orchestrator | 2025-07-12 19:51:06.536626 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536637 | orchestrator | Saturday 12 July 2025 19:51:01 +0000 (0:00:00.162) 0:00:01.861 ********* 2025-07-12 19:51:06.536648 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.536659 | orchestrator | 2025-07-12 19:51:06.536671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536682 | orchestrator | Saturday 12 July 2025 19:51:01 +0000 (0:00:00.197) 0:00:02.059 ********* 2025-07-12 19:51:06.536693 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.536704 | orchestrator | 2025-07-12 19:51:06.536715 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-07-12 19:51:06.536726 | orchestrator | Saturday 12 July 2025 19:51:01 +0000 (0:00:00.176) 0:00:02.236 ********* 2025-07-12 19:51:06.536759 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.536771 | orchestrator | 2025-07-12 19:51:06.536782 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536793 | orchestrator | Saturday 12 July 2025 19:51:01 +0000 (0:00:00.172) 0:00:02.408 ********* 2025-07-12 19:51:06.536804 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.536814 | orchestrator | 2025-07-12 19:51:06.536826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536837 | orchestrator | Saturday 12 July 2025 19:51:01 +0000 (0:00:00.201) 0:00:02.609 ********* 2025-07-12 19:51:06.536847 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518) 2025-07-12 19:51:06.536859 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518) 2025-07-12 19:51:06.536869 | orchestrator | 2025-07-12 19:51:06.536880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536891 | orchestrator | Saturday 12 July 2025 19:51:02 +0000 (0:00:00.367) 0:00:02.977 ********* 2025-07-12 19:51:06.536917 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9) 2025-07-12 19:51:06.536929 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9) 2025-07-12 19:51:06.536941 | orchestrator | 2025-07-12 19:51:06.536951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.536961 | orchestrator | Saturday 12 July 2025 19:51:02 +0000 (0:00:00.369) 0:00:03.346 ********* 2025-07-12 19:51:06.536976 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94) 2025-07-12 19:51:06.536991 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94) 2025-07-12 19:51:06.537006 | orchestrator | 2025-07-12 19:51:06.537016 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.537025 | orchestrator | Saturday 12 July 2025 19:51:03 +0000 (0:00:00.496) 0:00:03.843 ********* 2025-07-12 19:51:06.537034 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418) 2025-07-12 19:51:06.537051 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418) 2025-07-12 19:51:06.537061 | orchestrator | 2025-07-12 19:51:06.537071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:06.537080 | orchestrator | Saturday 12 July 2025 19:51:03 +0000 (0:00:00.510) 0:00:04.353 ********* 2025-07-12 19:51:06.537090 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 19:51:06.537099 | orchestrator | 2025-07-12 19:51:06.537109 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:06.537119 | orchestrator | Saturday 12 July 2025 19:51:04 +0000 (0:00:00.750) 0:00:05.103 ********* 2025-07-12 19:51:06.537128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-12 19:51:06.537138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-12 19:51:06.537147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-12 19:51:06.537157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-07-12 19:51:06.537166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-12 19:51:06.537175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-12 19:51:06.537185 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-12 19:51:06.537194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-12 19:51:06.537204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-12 19:51:06.537213 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-12 19:51:06.537223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-12 19:51:06.537232 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-12 19:51:06.537242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-12 19:51:06.537251 | orchestrator | 2025-07-12 19:51:06.537261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:06.537271 | orchestrator | Saturday 12 July 2025 19:51:04 +0000 (0:00:00.388) 0:00:05.492 ********* 2025-07-12 19:51:06.537280 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.537290 | orchestrator | 2025-07-12 19:51:06.537299 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:06.537309 | orchestrator | Saturday 12 July 2025 19:51:05 +0000 (0:00:00.203) 0:00:05.696 ********* 2025-07-12 19:51:06.537318 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.537328 | orchestrator | 2025-07-12 19:51:06.537337 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-07-12 19:51:06.537347 | orchestrator | Saturday 12 July 2025 19:51:05 +0000 (0:00:00.198) 0:00:05.894 ********* 2025-07-12 19:51:06.537356 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.537366 | orchestrator | 2025-07-12 19:51:06.537375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:06.537385 | orchestrator | Saturday 12 July 2025 19:51:05 +0000 (0:00:00.226) 0:00:06.121 ********* 2025-07-12 19:51:06.537394 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.537404 | orchestrator | 2025-07-12 19:51:06.537413 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:06.537423 | orchestrator | Saturday 12 July 2025 19:51:05 +0000 (0:00:00.201) 0:00:06.322 ********* 2025-07-12 19:51:06.537432 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.537442 | orchestrator | 2025-07-12 19:51:06.537457 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:06.537467 | orchestrator | Saturday 12 July 2025 19:51:05 +0000 (0:00:00.201) 0:00:06.524 ********* 2025-07-12 19:51:06.537476 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.537486 | orchestrator | 2025-07-12 19:51:06.537495 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:06.537505 | orchestrator | Saturday 12 July 2025 19:51:06 +0000 (0:00:00.219) 0:00:06.744 ********* 2025-07-12 19:51:06.537514 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:06.537524 | orchestrator | 2025-07-12 19:51:06.537533 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:06.537543 | orchestrator | Saturday 12 July 2025 19:51:06 +0000 (0:00:00.198) 0:00:06.943 ********* 2025-07-12 19:51:06.537558 | 
orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:13.529984 | orchestrator | 2025-07-12 19:51:13.530099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:13.530112 | orchestrator | Saturday 12 July 2025 19:51:06 +0000 (0:00:00.209) 0:00:07.153 ********* 2025-07-12 19:51:13.530121 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-12 19:51:13.530129 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-12 19:51:13.530137 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-12 19:51:13.530144 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-12 19:51:13.530151 | orchestrator | 2025-07-12 19:51:13.530159 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:13.530166 | orchestrator | Saturday 12 July 2025 19:51:07 +0000 (0:00:00.986) 0:00:08.140 ********* 2025-07-12 19:51:13.530186 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:13.530193 | orchestrator | 2025-07-12 19:51:13.530201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:13.530208 | orchestrator | Saturday 12 July 2025 19:51:07 +0000 (0:00:00.186) 0:00:08.326 ********* 2025-07-12 19:51:13.530215 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:13.530223 | orchestrator | 2025-07-12 19:51:13.530230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:13.530237 | orchestrator | Saturday 12 July 2025 19:51:07 +0000 (0:00:00.198) 0:00:08.524 ********* 2025-07-12 19:51:13.530244 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:51:13.530251 | orchestrator | 2025-07-12 19:51:13.530259 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:13.530266 | orchestrator | Saturday 12 July 2025 19:51:08 +0000 (0:00:00.203) 0:00:08.728 
*********
2025-07-12 19:51:13.530273 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530280 | orchestrator |
2025-07-12 19:51:13.530288 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-07-12 19:51:13.530295 | orchestrator | Saturday 12 July 2025 19:51:08 +0000 (0:00:00.195) 0:00:08.923 *********
2025-07-12 19:51:13.530302 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-07-12 19:51:13.530309 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-07-12 19:51:13.530316 | orchestrator |
2025-07-12 19:51:13.530324 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-07-12 19:51:13.530331 | orchestrator | Saturday 12 July 2025 19:51:08 +0000 (0:00:00.154) 0:00:09.079 *********
2025-07-12 19:51:13.530338 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530345 | orchestrator |
2025-07-12 19:51:13.530353 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-07-12 19:51:13.530360 | orchestrator | Saturday 12 July 2025 19:51:08 +0000 (0:00:00.126) 0:00:09.205 *********
2025-07-12 19:51:13.530367 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530374 | orchestrator |
2025-07-12 19:51:13.530381 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-07-12 19:51:13.530389 | orchestrator | Saturday 12 July 2025 19:51:08 +0000 (0:00:00.179) 0:00:09.384 *********
2025-07-12 19:51:13.530396 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530420 | orchestrator |
2025-07-12 19:51:13.530427 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-07-12 19:51:13.530435 | orchestrator | Saturday 12 July 2025 19:51:08 +0000 (0:00:00.130) 0:00:09.515 *********
2025-07-12 19:51:13.530442 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:51:13.530449 | orchestrator |
2025-07-12 19:51:13.530456 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-07-12 19:51:13.530464 | orchestrator | Saturday 12 July 2025 19:51:09 +0000 (0:00:00.137) 0:00:09.653 *********
2025-07-12 19:51:13.530472 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd5945923-5bd4-5f45-a4a9-07ddacb4606e'}})
2025-07-12 19:51:13.530479 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '661525d0-45b6-5e60-bde8-1fec1e4af76b'}})
2025-07-12 19:51:13.530487 | orchestrator |
2025-07-12 19:51:13.530494 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-07-12 19:51:13.530501 | orchestrator | Saturday 12 July 2025 19:51:09 +0000 (0:00:00.184) 0:00:09.837 *********
2025-07-12 19:51:13.530509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd5945923-5bd4-5f45-a4a9-07ddacb4606e'}})
2025-07-12 19:51:13.530521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '661525d0-45b6-5e60-bde8-1fec1e4af76b'}})
2025-07-12 19:51:13.530529 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530536 | orchestrator |
2025-07-12 19:51:13.530543 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-07-12 19:51:13.530551 | orchestrator | Saturday 12 July 2025 19:51:09 +0000 (0:00:00.137) 0:00:09.975 *********
2025-07-12 19:51:13.530558 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd5945923-5bd4-5f45-a4a9-07ddacb4606e'}})
2025-07-12 19:51:13.530565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '661525d0-45b6-5e60-bde8-1fec1e4af76b'}})
2025-07-12 19:51:13.530572 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530579 | orchestrator |
2025-07-12 19:51:13.530587 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-07-12 19:51:13.530594 | orchestrator | Saturday 12 July 2025 19:51:09 +0000 (0:00:00.150) 0:00:10.125 *********
2025-07-12 19:51:13.530601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'd5945923-5bd4-5f45-a4a9-07ddacb4606e'}})
2025-07-12 19:51:13.530608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '661525d0-45b6-5e60-bde8-1fec1e4af76b'}})
2025-07-12 19:51:13.530616 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530623 | orchestrator |
2025-07-12 19:51:13.530642 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-07-12 19:51:13.530649 | orchestrator | Saturday 12 July 2025 19:51:09 +0000 (0:00:00.269) 0:00:10.395 *********
2025-07-12 19:51:13.530657 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:51:13.530664 | orchestrator |
2025-07-12 19:51:13.530671 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-07-12 19:51:13.530679 | orchestrator | Saturday 12 July 2025 19:51:09 +0000 (0:00:00.148) 0:00:10.543 *********
2025-07-12 19:51:13.530686 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:51:13.530693 | orchestrator |
2025-07-12 19:51:13.530701 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-07-12 19:51:13.530708 | orchestrator | Saturday 12 July 2025 19:51:10 +0000 (0:00:00.136) 0:00:10.680 *********
2025-07-12 19:51:13.530715 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530722 | orchestrator |
2025-07-12 19:51:13.530729 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-07-12 19:51:13.530760 | orchestrator | Saturday 12 July 2025 19:51:10 +0000 (0:00:00.129) 0:00:10.810
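The block-only path is the one taken on this node (the "block + db", "block + wal", and "block + db + wal" variants are all skipped, since no separate DB/WAL devices are configured). The "Compile lvm_volumes" step then maps each device's `osd_lvm_uuid` to a `data`/`data_vg` pair. A minimal sketch of that mapping, assuming only the `osd-block-<uuid>` / `ceph-<uuid>` naming convention visible in the configuration dump in this log (the helper name itself is hypothetical, not part of the playbook):

```python
def compile_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Build a block-only lvm_volumes list from per-device OSD LVM UUIDs,
    following the osd-block-<uuid> / ceph-<uuid> naming seen in the log."""
    return [
        {
            "data": f"osd-block-{conf['osd_lvm_uuid']}",
            "data_vg": f"ceph-{conf['osd_lvm_uuid']}",
        }
        for _device, conf in sorted(ceph_osd_devices.items())
    ]

# Input mirrors the ceph_osd_devices dump for testbed-node-3.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "d5945923-5bd4-5f45-a4a9-07ddacb4606e"},
    "sdc": {"osd_lvm_uuid": "661525d0-45b6-5e60-bde8-1fec1e4af76b"},
}
lvm_volumes = compile_lvm_volumes(ceph_osd_devices)
```

Each entry names the logical volume (`data`) and the volume group that holds it (`data_vg`); this is the structure the handler later writes to the configuration file on the manager.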
*********
2025-07-12 19:51:13.530767 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530775 | orchestrator |
2025-07-12 19:51:13.530787 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-07-12 19:51:13.530795 | orchestrator | Saturday 12 July 2025 19:51:10 +0000 (0:00:00.116) 0:00:10.927 *********
2025-07-12 19:51:13.530802 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530809 | orchestrator |
2025-07-12 19:51:13.530816 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-07-12 19:51:13.530824 | orchestrator | Saturday 12 July 2025 19:51:10 +0000 (0:00:00.155) 0:00:11.082 *********
2025-07-12 19:51:13.530831 | orchestrator | ok: [testbed-node-3] => {
2025-07-12 19:51:13.530838 | orchestrator |     "ceph_osd_devices": {
2025-07-12 19:51:13.530846 | orchestrator |         "sdb": {
2025-07-12 19:51:13.530853 | orchestrator |             "osd_lvm_uuid": "d5945923-5bd4-5f45-a4a9-07ddacb4606e"
2025-07-12 19:51:13.530861 | orchestrator |         },
2025-07-12 19:51:13.530868 | orchestrator |         "sdc": {
2025-07-12 19:51:13.530875 | orchestrator |             "osd_lvm_uuid": "661525d0-45b6-5e60-bde8-1fec1e4af76b"
2025-07-12 19:51:13.530882 | orchestrator |         }
2025-07-12 19:51:13.530890 | orchestrator |     }
2025-07-12 19:51:13.530897 | orchestrator | }
2025-07-12 19:51:13.530905 | orchestrator |
2025-07-12 19:51:13.530912 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-07-12 19:51:13.530919 | orchestrator | Saturday 12 July 2025 19:51:10 +0000 (0:00:00.120) 0:00:11.203 *********
2025-07-12 19:51:13.530927 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530934 | orchestrator |
2025-07-12 19:51:13.530941 | orchestrator | TASK [Print DB devices] ********************************************************
2025-07-12 19:51:13.530948 | orchestrator | Saturday 12 July 2025 19:51:10 +0000 (0:00:00.130) 0:00:11.333
2025-07-12 19:51:13.530960 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530967 | orchestrator |
2025-07-12 19:51:13.530975 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-12 19:51:13.530982 | orchestrator | Saturday 12 July 2025 19:51:10 +0000 (0:00:00.120) 0:00:11.454 *********
2025-07-12 19:51:13.530989 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:51:13.530996 | orchestrator |
2025-07-12 19:51:13.531003 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-12 19:51:13.531011 | orchestrator | Saturday 12 July 2025 19:51:10 +0000 (0:00:00.120) 0:00:11.574 *********
2025-07-12 19:51:13.531018 | orchestrator | changed: [testbed-node-3] => {
2025-07-12 19:51:13.531025 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-07-12 19:51:13.531032 | orchestrator |         "ceph_osd_devices": {
2025-07-12 19:51:13.531039 | orchestrator |             "sdb": {
2025-07-12 19:51:13.531047 | orchestrator |                 "osd_lvm_uuid": "d5945923-5bd4-5f45-a4a9-07ddacb4606e"
2025-07-12 19:51:13.531054 | orchestrator |             },
2025-07-12 19:51:13.531061 | orchestrator |             "sdc": {
2025-07-12 19:51:13.531068 | orchestrator |                 "osd_lvm_uuid": "661525d0-45b6-5e60-bde8-1fec1e4af76b"
2025-07-12 19:51:13.531076 | orchestrator |             }
2025-07-12 19:51:13.531083 | orchestrator |         },
2025-07-12 19:51:13.531090 | orchestrator |         "lvm_volumes": [
2025-07-12 19:51:13.531097 | orchestrator |             {
2025-07-12 19:51:13.531105 | orchestrator |                 "data": "osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e",
2025-07-12 19:51:13.531112 | orchestrator |                 "data_vg": "ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e"
2025-07-12 19:51:13.531119 | orchestrator |             },
2025-07-12 19:51:13.531126 | orchestrator |             {
2025-07-12 19:51:13.531134 | orchestrator |                 "data": "osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b",
2025-07-12 19:51:13.531141 | orchestrator |                 "data_vg": "ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b"
2025-07-12 19:51:13.531148 | orchestrator |             }
2025-07-12 19:51:13.531155 | orchestrator |         ]
2025-07-12 19:51:13.531162 | orchestrator |     }
2025-07-12 19:51:13.531170 | orchestrator | }
2025-07-12 19:51:13.531177 | orchestrator |
2025-07-12 19:51:13.531184 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-12 19:51:13.531196 | orchestrator | Saturday 12 July 2025 19:51:11 +0000 (0:00:00.177) 0:00:11.751 *********
2025-07-12 19:51:13.531204 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 19:51:13.531211 | orchestrator |
2025-07-12 19:51:13.531218 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-12 19:51:13.531225 | orchestrator |
2025-07-12 19:51:13.531233 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 19:51:13.531240 | orchestrator | Saturday 12 July 2025 19:51:13 +0000 (0:00:01.961) 0:00:13.712 *********
2025-07-12 19:51:13.531247 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-07-12 19:51:13.531254 | orchestrator |
2025-07-12 19:51:13.531261 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 19:51:13.531269 | orchestrator | Saturday 12 July 2025 19:51:13 +0000 (0:00:00.205) 0:00:13.940 *********
2025-07-12 19:51:13.531276 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:51:13.531283 | orchestrator |
2025-07-12 19:51:13.531290 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:51:13.531302 | orchestrator | Saturday 12 July 2025 19:51:13 +0000 (0:00:00.205) 0:00:14.146 *********
2025-07-12 19:51:20.081438 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-07-12 19:51:20.081508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for
testbed-node-4 => (item=loop1) 2025-07-12 19:51:20.081517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-12 19:51:20.081525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-12 19:51:20.081532 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-12 19:51:20.081538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-12 19:51:20.081545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-12 19:51:20.081552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-12 19:51:20.081559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-12 19:51:20.081566 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-12 19:51:20.081584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-12 19:51:20.081591 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-12 19:51:20.081598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-12 19:51:20.081608 | orchestrator | 2025-07-12 19:51:20.081616 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081624 | orchestrator | Saturday 12 July 2025 19:51:13 +0000 (0:00:00.303) 0:00:14.449 ********* 2025-07-12 19:51:20.081631 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.081638 | orchestrator | 2025-07-12 19:51:20.081645 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081652 | orchestrator | Saturday 12 July 2025 
19:51:14 +0000 (0:00:00.178) 0:00:14.628 ********* 2025-07-12 19:51:20.081658 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.081665 | orchestrator | 2025-07-12 19:51:20.081672 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081678 | orchestrator | Saturday 12 July 2025 19:51:14 +0000 (0:00:00.172) 0:00:14.800 ********* 2025-07-12 19:51:20.081685 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.081692 | orchestrator | 2025-07-12 19:51:20.081698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081705 | orchestrator | Saturday 12 July 2025 19:51:14 +0000 (0:00:00.172) 0:00:14.972 ********* 2025-07-12 19:51:20.081712 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.081766 | orchestrator | 2025-07-12 19:51:20.081774 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081780 | orchestrator | Saturday 12 July 2025 19:51:14 +0000 (0:00:00.182) 0:00:15.154 ********* 2025-07-12 19:51:20.081787 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.081794 | orchestrator | 2025-07-12 19:51:20.081800 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081807 | orchestrator | Saturday 12 July 2025 19:51:14 +0000 (0:00:00.182) 0:00:15.336 ********* 2025-07-12 19:51:20.081813 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.081820 | orchestrator | 2025-07-12 19:51:20.081826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081833 | orchestrator | Saturday 12 July 2025 19:51:15 +0000 (0:00:00.420) 0:00:15.757 ********* 2025-07-12 19:51:20.081840 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.081846 | orchestrator | 2025-07-12 19:51:20.081853 | orchestrator | TASK 
[Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081859 | orchestrator | Saturday 12 July 2025 19:51:15 +0000 (0:00:00.195) 0:00:15.952 ********* 2025-07-12 19:51:20.081866 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.081872 | orchestrator | 2025-07-12 19:51:20.081879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081886 | orchestrator | Saturday 12 July 2025 19:51:15 +0000 (0:00:00.180) 0:00:16.133 ********* 2025-07-12 19:51:20.081892 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9) 2025-07-12 19:51:20.081899 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9) 2025-07-12 19:51:20.081906 | orchestrator | 2025-07-12 19:51:20.081913 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081919 | orchestrator | Saturday 12 July 2025 19:51:15 +0000 (0:00:00.368) 0:00:16.501 ********* 2025-07-12 19:51:20.081926 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7) 2025-07-12 19:51:20.081933 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7) 2025-07-12 19:51:20.081939 | orchestrator | 2025-07-12 19:51:20.081946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081952 | orchestrator | Saturday 12 July 2025 19:51:16 +0000 (0:00:00.391) 0:00:16.893 ********* 2025-07-12 19:51:20.081959 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350) 2025-07-12 19:51:20.081966 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350) 2025-07-12 19:51:20.081972 | orchestrator | 2025-07-12 
19:51:20.081979 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.081986 | orchestrator | Saturday 12 July 2025 19:51:16 +0000 (0:00:00.363) 0:00:17.257 ********* 2025-07-12 19:51:20.082002 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb) 2025-07-12 19:51:20.082010 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb) 2025-07-12 19:51:20.082051 | orchestrator | 2025-07-12 19:51:20.082059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:20.082067 | orchestrator | Saturday 12 July 2025 19:51:17 +0000 (0:00:00.373) 0:00:17.631 ********* 2025-07-12 19:51:20.082074 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 19:51:20.082082 | orchestrator | 2025-07-12 19:51:20.082090 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082101 | orchestrator | Saturday 12 July 2025 19:51:17 +0000 (0:00:00.293) 0:00:17.924 ********* 2025-07-12 19:51:20.082109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-12 19:51:20.082143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-12 19:51:20.082152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-12 19:51:20.082159 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-07-12 19:51:20.082167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-12 19:51:20.082175 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-12 19:51:20.082182 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-12 19:51:20.082190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-12 19:51:20.082198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-12 19:51:20.082205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-12 19:51:20.082213 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-12 19:51:20.082220 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-12 19:51:20.082228 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-12 19:51:20.082236 | orchestrator | 2025-07-12 19:51:20.082243 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082251 | orchestrator | Saturday 12 July 2025 19:51:17 +0000 (0:00:00.339) 0:00:18.263 ********* 2025-07-12 19:51:20.082258 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.082266 | orchestrator | 2025-07-12 19:51:20.082274 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082281 | orchestrator | Saturday 12 July 2025 19:51:17 +0000 (0:00:00.168) 0:00:18.432 ********* 2025-07-12 19:51:20.082289 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.082297 | orchestrator | 2025-07-12 19:51:20.082304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082312 | orchestrator | Saturday 12 July 2025 19:51:18 +0000 (0:00:00.459) 0:00:18.891 ********* 2025-07-12 19:51:20.082319 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.082327 | orchestrator | 
2025-07-12 19:51:20.082334 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082342 | orchestrator | Saturday 12 July 2025 19:51:18 +0000 (0:00:00.170) 0:00:19.062 ********* 2025-07-12 19:51:20.082350 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.082358 | orchestrator | 2025-07-12 19:51:20.082365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082373 | orchestrator | Saturday 12 July 2025 19:51:18 +0000 (0:00:00.190) 0:00:19.252 ********* 2025-07-12 19:51:20.082381 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.082388 | orchestrator | 2025-07-12 19:51:20.082396 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082403 | orchestrator | Saturday 12 July 2025 19:51:18 +0000 (0:00:00.174) 0:00:19.426 ********* 2025-07-12 19:51:20.082409 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.082416 | orchestrator | 2025-07-12 19:51:20.082422 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082429 | orchestrator | Saturday 12 July 2025 19:51:18 +0000 (0:00:00.178) 0:00:19.605 ********* 2025-07-12 19:51:20.082435 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.082442 | orchestrator | 2025-07-12 19:51:20.082449 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082455 | orchestrator | Saturday 12 July 2025 19:51:19 +0000 (0:00:00.161) 0:00:19.767 ********* 2025-07-12 19:51:20.082462 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.082468 | orchestrator | 2025-07-12 19:51:20.082475 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082486 | orchestrator | Saturday 12 July 2025 19:51:19 +0000 
(0:00:00.173) 0:00:19.941 ********* 2025-07-12 19:51:20.082493 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-12 19:51:20.082500 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-12 19:51:20.082507 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-12 19:51:20.082514 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-12 19:51:20.082520 | orchestrator | 2025-07-12 19:51:20.082527 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:20.082534 | orchestrator | Saturday 12 July 2025 19:51:19 +0000 (0:00:00.580) 0:00:20.521 ********* 2025-07-12 19:51:20.082540 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:20.082547 | orchestrator | 2025-07-12 19:51:20.082558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:26.011613 | orchestrator | Saturday 12 July 2025 19:51:20 +0000 (0:00:00.179) 0:00:20.700 ********* 2025-07-12 19:51:26.011815 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.011840 | orchestrator | 2025-07-12 19:51:26.011854 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:26.011865 | orchestrator | Saturday 12 July 2025 19:51:20 +0000 (0:00:00.173) 0:00:20.873 ********* 2025-07-12 19:51:26.011876 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.011887 | orchestrator | 2025-07-12 19:51:26.011899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:26.011910 | orchestrator | Saturday 12 July 2025 19:51:20 +0000 (0:00:00.188) 0:00:21.062 ********* 2025-07-12 19:51:26.011921 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.011932 | orchestrator | 2025-07-12 19:51:26.011961 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-12 19:51:26.011973 | orchestrator | 
Saturday 12 July 2025 19:51:20 +0000 (0:00:00.179) 0:00:21.241 ********* 2025-07-12 19:51:26.011983 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-07-12 19:51:26.011994 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-07-12 19:51:26.012005 | orchestrator | 2025-07-12 19:51:26.012016 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 19:51:26.012027 | orchestrator | Saturday 12 July 2025 19:51:20 +0000 (0:00:00.257) 0:00:21.499 ********* 2025-07-12 19:51:26.012038 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.012049 | orchestrator | 2025-07-12 19:51:26.012060 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 19:51:26.012071 | orchestrator | Saturday 12 July 2025 19:51:21 +0000 (0:00:00.128) 0:00:21.628 ********* 2025-07-12 19:51:26.012082 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.012093 | orchestrator | 2025-07-12 19:51:26.012104 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 19:51:26.012115 | orchestrator | Saturday 12 July 2025 19:51:21 +0000 (0:00:00.133) 0:00:21.761 ********* 2025-07-12 19:51:26.012126 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.012139 | orchestrator | 2025-07-12 19:51:26.012151 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 19:51:26.012163 | orchestrator | Saturday 12 July 2025 19:51:21 +0000 (0:00:00.115) 0:00:21.876 ********* 2025-07-12 19:51:26.012176 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:51:26.012189 | orchestrator | 2025-07-12 19:51:26.012202 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 19:51:26.012214 | orchestrator | Saturday 12 July 2025 19:51:21 +0000 (0:00:00.120) 0:00:21.997 ********* 
2025-07-12 19:51:26.012227 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa90e2bf-e75d-5c47-ae76-8a1384e00d58'}}) 2025-07-12 19:51:26.012259 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f895b30-8de9-512a-b128-a5c9585d4791'}}) 2025-07-12 19:51:26.012272 | orchestrator | 2025-07-12 19:51:26.012284 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 19:51:26.012322 | orchestrator | Saturday 12 July 2025 19:51:21 +0000 (0:00:00.145) 0:00:22.142 ********* 2025-07-12 19:51:26.012336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa90e2bf-e75d-5c47-ae76-8a1384e00d58'}})  2025-07-12 19:51:26.012351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f895b30-8de9-512a-b128-a5c9585d4791'}})  2025-07-12 19:51:26.012363 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.012376 | orchestrator | 2025-07-12 19:51:26.012389 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 19:51:26.012401 | orchestrator | Saturday 12 July 2025 19:51:21 +0000 (0:00:00.163) 0:00:22.305 ********* 2025-07-12 19:51:26.012412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa90e2bf-e75d-5c47-ae76-8a1384e00d58'}})  2025-07-12 19:51:26.012423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f895b30-8de9-512a-b128-a5c9585d4791'}})  2025-07-12 19:51:26.012434 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.012445 | orchestrator | 2025-07-12 19:51:26.012456 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 19:51:26.012466 | orchestrator | Saturday 12 July 2025 19:51:21 +0000 (0:00:00.171) 0:00:22.477 ********* 2025-07-12 19:51:26.012477 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa90e2bf-e75d-5c47-ae76-8a1384e00d58'}})  2025-07-12 19:51:26.012488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f895b30-8de9-512a-b128-a5c9585d4791'}})  2025-07-12 19:51:26.012500 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.012511 | orchestrator | 2025-07-12 19:51:26.012522 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 19:51:26.012532 | orchestrator | Saturday 12 July 2025 19:51:21 +0000 (0:00:00.126) 0:00:22.603 ********* 2025-07-12 19:51:26.012543 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:51:26.012554 | orchestrator | 2025-07-12 19:51:26.012565 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 19:51:26.012576 | orchestrator | Saturday 12 July 2025 19:51:22 +0000 (0:00:00.133) 0:00:22.737 ********* 2025-07-12 19:51:26.012587 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:51:26.012598 | orchestrator | 2025-07-12 19:51:26.012609 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 19:51:26.012620 | orchestrator | Saturday 12 July 2025 19:51:22 +0000 (0:00:00.127) 0:00:22.864 ********* 2025-07-12 19:51:26.012630 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.012641 | orchestrator | 2025-07-12 19:51:26.012672 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 19:51:26.012684 | orchestrator | Saturday 12 July 2025 19:51:22 +0000 (0:00:00.154) 0:00:23.019 ********* 2025-07-12 19:51:26.012695 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.012706 | orchestrator | 2025-07-12 19:51:26.012716 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 19:51:26.012727 | orchestrator | 
Saturday 12 July 2025 19:51:22 +0000 (0:00:00.323) 0:00:23.342 ********* 2025-07-12 19:51:26.012768 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.012787 | orchestrator | 2025-07-12 19:51:26.012804 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-12 19:51:26.012821 | orchestrator | Saturday 12 July 2025 19:51:22 +0000 (0:00:00.112) 0:00:23.454 ********* 2025-07-12 19:51:26.012839 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 19:51:26.012866 | orchestrator |  "ceph_osd_devices": { 2025-07-12 19:51:26.012884 | orchestrator |  "sdb": { 2025-07-12 19:51:26.012901 | orchestrator |  "osd_lvm_uuid": "aa90e2bf-e75d-5c47-ae76-8a1384e00d58" 2025-07-12 19:51:26.012920 | orchestrator |  }, 2025-07-12 19:51:26.012937 | orchestrator |  "sdc": { 2025-07-12 19:51:26.012969 | orchestrator |  "osd_lvm_uuid": "2f895b30-8de9-512a-b128-a5c9585d4791" 2025-07-12 19:51:26.012988 | orchestrator |  } 2025-07-12 19:51:26.013005 | orchestrator |  } 2025-07-12 19:51:26.013024 | orchestrator | } 2025-07-12 19:51:26.013042 | orchestrator | 2025-07-12 19:51:26.013060 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 19:51:26.013079 | orchestrator | Saturday 12 July 2025 19:51:22 +0000 (0:00:00.127) 0:00:23.582 ********* 2025-07-12 19:51:26.013097 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.013117 | orchestrator | 2025-07-12 19:51:26.013139 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 19:51:26.013151 | orchestrator | Saturday 12 July 2025 19:51:23 +0000 (0:00:00.115) 0:00:23.697 ********* 2025-07-12 19:51:26.013162 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.013172 | orchestrator | 2025-07-12 19:51:26.013183 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-12 19:51:26.013194 | orchestrator | Saturday 
12 July 2025 19:51:23 +0000 (0:00:00.125) 0:00:23.823 ********* 2025-07-12 19:51:26.013205 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:51:26.013216 | orchestrator | 2025-07-12 19:51:26.013226 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-12 19:51:26.013237 | orchestrator | Saturday 12 July 2025 19:51:23 +0000 (0:00:00.116) 0:00:23.939 ********* 2025-07-12 19:51:26.013248 | orchestrator | changed: [testbed-node-4] => { 2025-07-12 19:51:26.013258 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-12 19:51:26.013269 | orchestrator |  "ceph_osd_devices": { 2025-07-12 19:51:26.013280 | orchestrator |  "sdb": { 2025-07-12 19:51:26.013291 | orchestrator |  "osd_lvm_uuid": "aa90e2bf-e75d-5c47-ae76-8a1384e00d58" 2025-07-12 19:51:26.013318 | orchestrator |  }, 2025-07-12 19:51:26.013329 | orchestrator |  "sdc": { 2025-07-12 19:51:26.013340 | orchestrator |  "osd_lvm_uuid": "2f895b30-8de9-512a-b128-a5c9585d4791" 2025-07-12 19:51:26.013351 | orchestrator |  } 2025-07-12 19:51:26.013362 | orchestrator |  }, 2025-07-12 19:51:26.013372 | orchestrator |  "lvm_volumes": [ 2025-07-12 19:51:26.013383 | orchestrator |  { 2025-07-12 19:51:26.013394 | orchestrator |  "data": "osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58", 2025-07-12 19:51:26.013405 | orchestrator |  "data_vg": "ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58" 2025-07-12 19:51:26.013415 | orchestrator |  }, 2025-07-12 19:51:26.013426 | orchestrator |  { 2025-07-12 19:51:26.013437 | orchestrator |  "data": "osd-block-2f895b30-8de9-512a-b128-a5c9585d4791", 2025-07-12 19:51:26.013447 | orchestrator |  "data_vg": "ceph-2f895b30-8de9-512a-b128-a5c9585d4791" 2025-07-12 19:51:26.013548 | orchestrator |  } 2025-07-12 19:51:26.013564 | orchestrator |  ] 2025-07-12 19:51:26.013575 | orchestrator |  } 2025-07-12 19:51:26.013585 | orchestrator | } 2025-07-12 19:51:26.013596 | orchestrator | 2025-07-12 19:51:26.013607 | orchestrator | RUNNING HANDLER 
[Write configuration file] ************************************* 2025-07-12 19:51:26.013618 | orchestrator | Saturday 12 July 2025 19:51:23 +0000 (0:00:00.166) 0:00:24.106 ********* 2025-07-12 19:51:26.013628 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-12 19:51:26.013639 | orchestrator | 2025-07-12 19:51:26.013650 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-12 19:51:26.013661 | orchestrator | 2025-07-12 19:51:26.013672 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 19:51:26.013682 | orchestrator | Saturday 12 July 2025 19:51:24 +0000 (0:00:00.940) 0:00:25.047 ********* 2025-07-12 19:51:26.013693 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-12 19:51:26.013704 | orchestrator | 2025-07-12 19:51:26.013715 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 19:51:26.013725 | orchestrator | Saturday 12 July 2025 19:51:24 +0000 (0:00:00.503) 0:00:25.550 ********* 2025-07-12 19:51:26.013767 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:51:26.013779 | orchestrator | 2025-07-12 19:51:26.013790 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:26.013801 | orchestrator | Saturday 12 July 2025 19:51:25 +0000 (0:00:00.686) 0:00:26.236 ********* 2025-07-12 19:51:26.013812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-12 19:51:26.013822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-12 19:51:26.013833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-12 19:51:26.013844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-12 
19:51:26.013855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-12 19:51:26.013865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-07-12 19:51:26.013890 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-12 19:51:32.917823 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-12 19:51:32.917902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-12 19:51:32.917913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-12 19:51:32.917921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-12 19:51:32.917928 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-12 19:51:32.917936 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-12 19:51:32.917944 | orchestrator | 2025-07-12 19:51:32.917952 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.917960 | orchestrator | Saturday 12 July 2025 19:51:25 +0000 (0:00:00.388) 0:00:26.625 ********* 2025-07-12 19:51:32.917967 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.917975 | orchestrator | 2025-07-12 19:51:32.917982 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.917990 | orchestrator | Saturday 12 July 2025 19:51:26 +0000 (0:00:00.214) 0:00:26.840 ********* 2025-07-12 19:51:32.917997 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918004 | orchestrator | 2025-07-12 19:51:32.918011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-07-12 19:51:32.918053 | orchestrator | Saturday 12 July 2025 19:51:26 +0000 (0:00:00.208) 0:00:27.048 ********* 2025-07-12 19:51:32.918061 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918069 | orchestrator | 2025-07-12 19:51:32.918076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918083 | orchestrator | Saturday 12 July 2025 19:51:26 +0000 (0:00:00.214) 0:00:27.263 ********* 2025-07-12 19:51:32.918090 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918098 | orchestrator | 2025-07-12 19:51:32.918118 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918126 | orchestrator | Saturday 12 July 2025 19:51:26 +0000 (0:00:00.200) 0:00:27.463 ********* 2025-07-12 19:51:32.918133 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918140 | orchestrator | 2025-07-12 19:51:32.918147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918155 | orchestrator | Saturday 12 July 2025 19:51:27 +0000 (0:00:00.205) 0:00:27.669 ********* 2025-07-12 19:51:32.918162 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918169 | orchestrator | 2025-07-12 19:51:32.918176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918184 | orchestrator | Saturday 12 July 2025 19:51:27 +0000 (0:00:00.204) 0:00:27.873 ********* 2025-07-12 19:51:32.918191 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918213 | orchestrator | 2025-07-12 19:51:32.918220 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918228 | orchestrator | Saturday 12 July 2025 19:51:27 +0000 (0:00:00.208) 0:00:28.081 ********* 2025-07-12 19:51:32.918235 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918242 
| orchestrator | 2025-07-12 19:51:32.918259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918266 | orchestrator | Saturday 12 July 2025 19:51:27 +0000 (0:00:00.187) 0:00:28.269 ********* 2025-07-12 19:51:32.918274 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909) 2025-07-12 19:51:32.918282 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909) 2025-07-12 19:51:32.918289 | orchestrator | 2025-07-12 19:51:32.918297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918304 | orchestrator | Saturday 12 July 2025 19:51:28 +0000 (0:00:00.716) 0:00:28.985 ********* 2025-07-12 19:51:32.918311 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8) 2025-07-12 19:51:32.918319 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8) 2025-07-12 19:51:32.918326 | orchestrator | 2025-07-12 19:51:32.918333 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918340 | orchestrator | Saturday 12 July 2025 19:51:29 +0000 (0:00:00.651) 0:00:29.636 ********* 2025-07-12 19:51:32.918348 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28) 2025-07-12 19:51:32.918355 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28) 2025-07-12 19:51:32.918364 | orchestrator | 2025-07-12 19:51:32.918372 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918380 | orchestrator | Saturday 12 July 2025 19:51:29 +0000 (0:00:00.335) 0:00:29.972 ********* 2025-07-12 19:51:32.918388 | orchestrator | ok: 
[testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914) 2025-07-12 19:51:32.918396 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914) 2025-07-12 19:51:32.918404 | orchestrator | 2025-07-12 19:51:32.918412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:51:32.918420 | orchestrator | Saturday 12 July 2025 19:51:29 +0000 (0:00:00.326) 0:00:30.298 ********* 2025-07-12 19:51:32.918429 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 19:51:32.918437 | orchestrator | 2025-07-12 19:51:32.918445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918453 | orchestrator | Saturday 12 July 2025 19:51:29 +0000 (0:00:00.244) 0:00:30.542 ********* 2025-07-12 19:51:32.918473 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-12 19:51:32.918482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-12 19:51:32.918490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-12 19:51:32.918499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-12 19:51:32.918507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-12 19:51:32.918514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-12 19:51:32.918522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-12 19:51:32.918531 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-12 19:51:32.918539 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-12 19:51:32.918552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-12 19:51:32.918560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-07-12 19:51:32.918568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-12 19:51:32.918576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-12 19:51:32.918584 | orchestrator | 2025-07-12 19:51:32.918593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918600 | orchestrator | Saturday 12 July 2025 19:51:30 +0000 (0:00:00.282) 0:00:30.825 ********* 2025-07-12 19:51:32.918608 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918617 | orchestrator | 2025-07-12 19:51:32.918625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918632 | orchestrator | Saturday 12 July 2025 19:51:30 +0000 (0:00:00.169) 0:00:30.994 ********* 2025-07-12 19:51:32.918640 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918649 | orchestrator | 2025-07-12 19:51:32.918657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918665 | orchestrator | Saturday 12 July 2025 19:51:30 +0000 (0:00:00.165) 0:00:31.160 ********* 2025-07-12 19:51:32.918673 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918681 | orchestrator | 2025-07-12 19:51:32.918689 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918697 | orchestrator | Saturday 12 July 2025 19:51:30 +0000 (0:00:00.151) 0:00:31.312 ********* 2025-07-12 19:51:32.918705 | orchestrator | 
skipping: [testbed-node-5] 2025-07-12 19:51:32.918713 | orchestrator | 2025-07-12 19:51:32.918722 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918746 | orchestrator | Saturday 12 July 2025 19:51:30 +0000 (0:00:00.139) 0:00:31.451 ********* 2025-07-12 19:51:32.918756 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918763 | orchestrator | 2025-07-12 19:51:32.918771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918778 | orchestrator | Saturday 12 July 2025 19:51:30 +0000 (0:00:00.139) 0:00:31.591 ********* 2025-07-12 19:51:32.918785 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918793 | orchestrator | 2025-07-12 19:51:32.918800 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918807 | orchestrator | Saturday 12 July 2025 19:51:31 +0000 (0:00:00.418) 0:00:32.010 ********* 2025-07-12 19:51:32.918815 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918822 | orchestrator | 2025-07-12 19:51:32.918829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918836 | orchestrator | Saturday 12 July 2025 19:51:31 +0000 (0:00:00.153) 0:00:32.163 ********* 2025-07-12 19:51:32.918844 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918851 | orchestrator | 2025-07-12 19:51:32.918858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918866 | orchestrator | Saturday 12 July 2025 19:51:31 +0000 (0:00:00.168) 0:00:32.332 ********* 2025-07-12 19:51:32.918873 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-12 19:51:32.918880 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-07-12 19:51:32.918888 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-07-12 
19:51:32.918895 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-12 19:51:32.918902 | orchestrator | 2025-07-12 19:51:32.918910 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918917 | orchestrator | Saturday 12 July 2025 19:51:32 +0000 (0:00:00.478) 0:00:32.810 ********* 2025-07-12 19:51:32.918924 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918931 | orchestrator | 2025-07-12 19:51:32.918939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918951 | orchestrator | Saturday 12 July 2025 19:51:32 +0000 (0:00:00.185) 0:00:32.996 ********* 2025-07-12 19:51:32.918958 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918965 | orchestrator | 2025-07-12 19:51:32.918973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.918980 | orchestrator | Saturday 12 July 2025 19:51:32 +0000 (0:00:00.186) 0:00:33.182 ********* 2025-07-12 19:51:32.918987 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.918995 | orchestrator | 2025-07-12 19:51:32.919002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:51:32.919009 | orchestrator | Saturday 12 July 2025 19:51:32 +0000 (0:00:00.172) 0:00:33.354 ********* 2025-07-12 19:51:32.919020 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:32.919028 | orchestrator | 2025-07-12 19:51:32.919035 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-12 19:51:32.919047 | orchestrator | Saturday 12 July 2025 19:51:32 +0000 (0:00:00.182) 0:00:33.536 ********* 2025-07-12 19:51:36.504517 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-07-12 19:51:36.504659 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 
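
The "Set UUIDs for OSD VGs/LVs" task above fills in an `osd_lvm_uuid` for each OSD device whose value is still `None`. The UUIDs printed later in the log (e.g. `2d3a8e2a-8518-5d0a-...`) all carry the version-5 marker, which points at deterministic, name-based generation rather than random UUIDs. A minimal sketch of that idea follows; the namespace and seed string here are assumptions for illustration, not taken from the playbook:

```python
import uuid

def derive_osd_uuid(hostname: str, device: str) -> str:
    # Hypothetical seed: the playbook's actual namespace/name inputs are
    # not visible in this log, only the resulting version-5 UUIDs are.
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

u1 = derive_osd_uuid("testbed-node-5", "sdb")
u2 = derive_osd_uuid("testbed-node-5", "sdb")
assert u1 == u2        # deterministic: re-running yields the same UUID
assert u1[14] == "5"   # version-5 digit, matching the UUIDs in the log
```

Determinism is the point: re-running the play against the same host/device must yield the same UUID, so the VG/LV names stay stable across runs.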
2025-07-12 19:51:36.504673 | orchestrator | 2025-07-12 19:51:36.504682 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 19:51:36.504689 | orchestrator | Saturday 12 July 2025 19:51:33 +0000 (0:00:00.153) 0:00:33.690 ********* 2025-07-12 19:51:36.504697 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.504705 | orchestrator | 2025-07-12 19:51:36.504713 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 19:51:36.504720 | orchestrator | Saturday 12 July 2025 19:51:33 +0000 (0:00:00.150) 0:00:33.840 ********* 2025-07-12 19:51:36.504727 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.504766 | orchestrator | 2025-07-12 19:51:36.504774 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 19:51:36.504782 | orchestrator | Saturday 12 July 2025 19:51:33 +0000 (0:00:00.130) 0:00:33.971 ********* 2025-07-12 19:51:36.504789 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.504796 | orchestrator | 2025-07-12 19:51:36.504804 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 19:51:36.504811 | orchestrator | Saturday 12 July 2025 19:51:33 +0000 (0:00:00.147) 0:00:34.119 ********* 2025-07-12 19:51:36.504821 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:51:36.504833 | orchestrator | 2025-07-12 19:51:36.504841 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 19:51:36.504848 | orchestrator | Saturday 12 July 2025 19:51:33 +0000 (0:00:00.240) 0:00:34.359 ********* 2025-07-12 19:51:36.504856 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'}}) 2025-07-12 19:51:36.504864 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'71032f38-677b-542f-825f-c43a6d71b028'}}) 2025-07-12 19:51:36.504871 | orchestrator | 2025-07-12 19:51:36.504879 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 19:51:36.504886 | orchestrator | Saturday 12 July 2025 19:51:33 +0000 (0:00:00.161) 0:00:34.521 ********* 2025-07-12 19:51:36.504894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'}})  2025-07-12 19:51:36.504902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '71032f38-677b-542f-825f-c43a6d71b028'}})  2025-07-12 19:51:36.504909 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.504917 | orchestrator | 2025-07-12 19:51:36.504934 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 19:51:36.504942 | orchestrator | Saturday 12 July 2025 19:51:34 +0000 (0:00:00.132) 0:00:34.653 ********* 2025-07-12 19:51:36.504949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'}})  2025-07-12 19:51:36.504971 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '71032f38-677b-542f-825f-c43a6d71b028'}})  2025-07-12 19:51:36.504979 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.504986 | orchestrator | 2025-07-12 19:51:36.504994 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 19:51:36.505001 | orchestrator | Saturday 12 July 2025 19:51:34 +0000 (0:00:00.129) 0:00:34.782 ********* 2025-07-12 19:51:36.505008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'}})  2025-07-12 19:51:36.505015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 
'71032f38-677b-542f-825f-c43a6d71b028'}})  2025-07-12 19:51:36.505023 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.505030 | orchestrator | 2025-07-12 19:51:36.505037 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 19:51:36.505044 | orchestrator | Saturday 12 July 2025 19:51:34 +0000 (0:00:00.126) 0:00:34.908 ********* 2025-07-12 19:51:36.505052 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:51:36.505059 | orchestrator | 2025-07-12 19:51:36.505066 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 19:51:36.505074 | orchestrator | Saturday 12 July 2025 19:51:34 +0000 (0:00:00.115) 0:00:35.024 ********* 2025-07-12 19:51:36.505081 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:51:36.505088 | orchestrator | 2025-07-12 19:51:36.505097 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 19:51:36.505105 | orchestrator | Saturday 12 July 2025 19:51:34 +0000 (0:00:00.130) 0:00:35.154 ********* 2025-07-12 19:51:36.505114 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.505122 | orchestrator | 2025-07-12 19:51:36.505130 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 19:51:36.505138 | orchestrator | Saturday 12 July 2025 19:51:34 +0000 (0:00:00.116) 0:00:35.271 ********* 2025-07-12 19:51:36.505147 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.505155 | orchestrator | 2025-07-12 19:51:36.505163 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 19:51:36.505171 | orchestrator | Saturday 12 July 2025 19:51:34 +0000 (0:00:00.121) 0:00:35.392 ********* 2025-07-12 19:51:36.505180 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.505188 | orchestrator | 2025-07-12 19:51:36.505196 | orchestrator | TASK [Print 
ceph_osd_devices] ************************************************** 2025-07-12 19:51:36.505204 | orchestrator | Saturday 12 July 2025 19:51:34 +0000 (0:00:00.117) 0:00:35.510 ********* 2025-07-12 19:51:36.505212 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 19:51:36.505221 | orchestrator |  "ceph_osd_devices": { 2025-07-12 19:51:36.505229 | orchestrator |  "sdb": { 2025-07-12 19:51:36.505237 | orchestrator |  "osd_lvm_uuid": "2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a" 2025-07-12 19:51:36.505258 | orchestrator |  }, 2025-07-12 19:51:36.505266 | orchestrator |  "sdc": { 2025-07-12 19:51:36.505275 | orchestrator |  "osd_lvm_uuid": "71032f38-677b-542f-825f-c43a6d71b028" 2025-07-12 19:51:36.505283 | orchestrator |  } 2025-07-12 19:51:36.505291 | orchestrator |  } 2025-07-12 19:51:36.505300 | orchestrator | } 2025-07-12 19:51:36.505308 | orchestrator | 2025-07-12 19:51:36.505316 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 19:51:36.505330 | orchestrator | Saturday 12 July 2025 19:51:35 +0000 (0:00:00.119) 0:00:35.629 ********* 2025-07-12 19:51:36.505339 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.505347 | orchestrator | 2025-07-12 19:51:36.505355 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 19:51:36.505363 | orchestrator | Saturday 12 July 2025 19:51:35 +0000 (0:00:00.110) 0:00:35.740 ********* 2025-07-12 19:51:36.505372 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.505380 | orchestrator | 2025-07-12 19:51:36.505388 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-12 19:51:36.505401 | orchestrator | Saturday 12 July 2025 19:51:35 +0000 (0:00:00.250) 0:00:35.990 ********* 2025-07-12 19:51:36.505410 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:51:36.505418 | orchestrator | 2025-07-12 19:51:36.505426 | orchestrator | TASK [Print 
configuration data] ************************************************ 2025-07-12 19:51:36.505434 | orchestrator | Saturday 12 July 2025 19:51:35 +0000 (0:00:00.130) 0:00:36.121 ********* 2025-07-12 19:51:36.505442 | orchestrator | changed: [testbed-node-5] => { 2025-07-12 19:51:36.505450 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-12 19:51:36.505459 | orchestrator |  "ceph_osd_devices": { 2025-07-12 19:51:36.505467 | orchestrator |  "sdb": { 2025-07-12 19:51:36.505475 | orchestrator |  "osd_lvm_uuid": "2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a" 2025-07-12 19:51:36.505484 | orchestrator |  }, 2025-07-12 19:51:36.505491 | orchestrator |  "sdc": { 2025-07-12 19:51:36.505498 | orchestrator |  "osd_lvm_uuid": "71032f38-677b-542f-825f-c43a6d71b028" 2025-07-12 19:51:36.505505 | orchestrator |  } 2025-07-12 19:51:36.505513 | orchestrator |  }, 2025-07-12 19:51:36.505520 | orchestrator |  "lvm_volumes": [ 2025-07-12 19:51:36.505527 | orchestrator |  { 2025-07-12 19:51:36.505534 | orchestrator |  "data": "osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a", 2025-07-12 19:51:36.505541 | orchestrator |  "data_vg": "ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a" 2025-07-12 19:51:36.505549 | orchestrator |  }, 2025-07-12 19:51:36.505556 | orchestrator |  { 2025-07-12 19:51:36.505563 | orchestrator |  "data": "osd-block-71032f38-677b-542f-825f-c43a6d71b028", 2025-07-12 19:51:36.505571 | orchestrator |  "data_vg": "ceph-71032f38-677b-542f-825f-c43a6d71b028" 2025-07-12 19:51:36.505578 | orchestrator |  } 2025-07-12 19:51:36.505585 | orchestrator |  ] 2025-07-12 19:51:36.505592 | orchestrator |  } 2025-07-12 19:51:36.505603 | orchestrator | } 2025-07-12 19:51:36.505610 | orchestrator | 2025-07-12 19:51:36.505618 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-12 19:51:36.505625 | orchestrator | Saturday 12 July 2025 19:51:35 +0000 (0:00:00.182) 0:00:36.303 ********* 2025-07-12 19:51:36.505632 | orchestrator | changed: 
[testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-12 19:51:36.505640 | orchestrator | 2025-07-12 19:51:36.505647 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:51:36.505660 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 19:51:36.505668 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 19:51:36.505676 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 19:51:36.505683 | orchestrator | 2025-07-12 19:51:36.505691 | orchestrator | 2025-07-12 19:51:36.505698 | orchestrator | 2025-07-12 19:51:36.505705 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:51:36.505713 | orchestrator | Saturday 12 July 2025 19:51:36 +0000 (0:00:00.800) 0:00:37.103 ********* 2025-07-12 19:51:36.505722 | orchestrator | =============================================================================== 2025-07-12 19:51:36.505749 | orchestrator | Write configuration file ------------------------------------------------ 3.70s 2025-07-12 19:51:36.505756 | orchestrator | Get initial list of available block devices ----------------------------- 1.11s 2025-07-12 19:51:36.505764 | orchestrator | Add known links to the list of available block devices ------------------ 1.02s 2025-07-12 19:51:36.505771 | orchestrator | Add known partitions to the list of available block devices ------------- 1.01s 2025-07-12 19:51:36.505778 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-07-12 19:51:36.505791 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.94s 2025-07-12 19:51:36.505798 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-07-12 
19:51:36.505805 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s 2025-07-12 19:51:36.505813 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2025-07-12 19:51:36.505820 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2025-07-12 19:51:36.505827 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.57s 2025-07-12 19:51:36.505835 | orchestrator | Set WAL devices config data --------------------------------------------- 0.56s 2025-07-12 19:51:36.505842 | orchestrator | Print configuration data ------------------------------------------------ 0.53s 2025-07-12 19:51:36.505849 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.52s 2025-07-12 19:51:36.505862 | orchestrator | Add known links to the list of available block devices ------------------ 0.51s 2025-07-12 19:51:36.705431 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.50s 2025-07-12 19:51:36.705506 | orchestrator | Print DB devices -------------------------------------------------------- 0.50s 2025-07-12 19:51:36.705520 | orchestrator | Add known links to the list of available block devices ------------------ 0.50s 2025-07-12 19:51:36.705531 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.49s 2025-07-12 19:51:36.705543 | orchestrator | Add known partitions to the list of available block devices ------------- 0.48s 2025-07-12 19:51:59.053055 | orchestrator | 2025-07-12 19:51:59 | INFO  | Task 1c11dcb7-34c2-413e-a2ec-a8bc1eb8b305 (sync inventory) is running in background. Output coming soon. 
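For readers tracing the "Compile lvm_volumes" step and the "Print configuration data" output above: the play appears to turn each entry of `ceph_osd_devices` into a block-only `lvm_volumes` item named after the OSD's LVM UUID. A minimal sketch of that mapping, inferred from the observed naming scheme (the helper function is hypothetical, not the playbook's code):

```python
# Sketch of the lvm_volumes derivation visible in the "Print configuration
# data" task: block-only OSDs, no separate DB/WAL devices.
# Mirrors the observed "osd-block-<uuid>" / "ceph-<uuid>" naming; this is
# an illustration, not the actual playbook implementation.

def compile_lvm_volumes(ceph_osd_devices):
    """Map {device: {"osd_lvm_uuid": ...}} to ceph-ansible lvm_volumes."""
    volumes = []
    for device, params in ceph_osd_devices.items():
        osd_uuid = params["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{osd_uuid}",   # logical volume name
            "data_vg": f"ceph-{osd_uuid}",     # volume group name
        })
    return volumes

devices = {
    "sdb": {"osd_lvm_uuid": "2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a"},
    "sdc": {"osd_lvm_uuid": "71032f38-677b-542f-825f-c43a6d71b028"},
}
print(compile_lvm_volumes(devices))
```

With DB or WAL devices present, the skipped "(block + db)" / "(block + wal)" variants above would extend each item with `db`/`db_vg` or `wal`/`wal_vg` keys instead.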
2025-07-12 19:52:16.055694 | orchestrator | 2025-07-12 19:52:00 | INFO  | Starting group_vars file reorganization 2025-07-12 19:52:16.055883 | orchestrator | 2025-07-12 19:52:00 | INFO  | Moved 0 file(s) to their respective directories 2025-07-12 19:52:16.055901 | orchestrator | 2025-07-12 19:52:00 | INFO  | Group_vars file reorganization completed 2025-07-12 19:52:16.055913 | orchestrator | 2025-07-12 19:52:02 | INFO  | Starting variable preparation from inventory 2025-07-12 19:52:16.055925 | orchestrator | 2025-07-12 19:52:03 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-07-12 19:52:16.055937 | orchestrator | 2025-07-12 19:52:03 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-07-12 19:52:16.055948 | orchestrator | 2025-07-12 19:52:03 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-07-12 19:52:16.055959 | orchestrator | 2025-07-12 19:52:03 | INFO  | 3 file(s) written, 6 host(s) processed 2025-07-12 19:52:16.055970 | orchestrator | 2025-07-12 19:52:03 | INFO  | Variable preparation completed 2025-07-12 19:52:16.055981 | orchestrator | 2025-07-12 19:52:04 | INFO  | Starting inventory overwrite handling 2025-07-12 19:52:16.055993 | orchestrator | 2025-07-12 19:52:04 | INFO  | Handling group overwrites in 99-overwrite 2025-07-12 19:52:16.056004 | orchestrator | 2025-07-12 19:52:04 | INFO  | Removing group frr:children from 60-generic 2025-07-12 19:52:16.056015 | orchestrator | 2025-07-12 19:52:04 | INFO  | Removing group storage:children from 50-kolla 2025-07-12 19:52:16.056026 | orchestrator | 2025-07-12 19:52:04 | INFO  | Removing group netbird:children from 50-infrastruture 2025-07-12 19:52:16.056037 | orchestrator | 2025-07-12 19:52:04 | INFO  | Removing group ceph-mds from 50-ceph 2025-07-12 19:52:16.056049 | orchestrator | 2025-07-12 19:52:04 | INFO  | Removing group ceph-rgw from 50-ceph 2025-07-12 19:52:16.056060 | orchestrator | 2025-07-12 19:52:04 | INFO  | Handling group 
overwrites in 20-roles 2025-07-12 19:52:16.056071 | orchestrator | 2025-07-12 19:52:04 | INFO  | Removing group k3s_node from 50-infrastruture 2025-07-12 19:52:16.056121 | orchestrator | 2025-07-12 19:52:04 | INFO  | Removed 6 group(s) in total 2025-07-12 19:52:16.056142 | orchestrator | 2025-07-12 19:52:04 | INFO  | Inventory overwrite handling completed 2025-07-12 19:52:16.056162 | orchestrator | 2025-07-12 19:52:04 | INFO  | Starting merge of inventory files 2025-07-12 19:52:16.056182 | orchestrator | 2025-07-12 19:52:04 | INFO  | Inventory files merged successfully 2025-07-12 19:52:16.056203 | orchestrator | 2025-07-12 19:52:08 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-07-12 19:52:16.056222 | orchestrator | 2025-07-12 19:52:14 | INFO  | Successfully wrote ClusterShell configuration 2025-07-12 19:52:16.056239 | orchestrator | [master 14f8fab] 2025-07-12-19-52 2025-07-12 19:52:16.056253 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-07-12 19:52:18.080856 | orchestrator | 2025-07-12 19:52:18 | INFO  | Task 714a5942-c77f-4dc2-b3a4-36423f37e13b (ceph-create-lvm-devices) was prepared for execution. 2025-07-12 19:52:18.080967 | orchestrator | 2025-07-12 19:52:18 | INFO  | It takes a moment until task 714a5942-c77f-4dc2-b3a4-36423f37e13b (ceph-create-lvm-devices) has been started and output is visible here. 
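A side note on the `osd_lvm_uuid` values in this log (e.g. `2d3a8e2a-8518-5d0a-...`): the third group of each starts with `5`, marking them as version-5 (name-based) UUIDs, which explains why the earlier "Set UUIDs for OSD VGs/LVs" task produces stable names across runs. A sketch of how such deterministic UUIDs can be derived (the namespace and name format below are assumptions for illustration, not taken from the playbook):

```python
import uuid

# Version-5 UUIDs are deterministic: the same namespace + name always
# yields the same UUID, so re-running the play cannot rename VGs/LVs.
# Namespace and name format here are illustrative assumptions.
NAMESPACE = uuid.NAMESPACE_DNS

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a stable, name-based UUID for a host's OSD device."""
    return str(uuid.uuid5(NAMESPACE, f"{hostname}-{device}"))

u = osd_lvm_uuid("testbed-node-5", "sdb")
assert u == osd_lvm_uuid("testbed-node-5", "sdb")  # stable across runs
assert uuid.UUID(u).version == 5
print(u)
```

Random (version-4) UUIDs would instead change on every run and force VG/LV recreation, so the choice of name-based UUIDs is what makes this play idempotent.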
2025-07-12 19:52:27.591701 | orchestrator | 2025-07-12 19:52:27.591825 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-12 19:52:27.591841 | orchestrator | 2025-07-12 19:52:27.591853 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 19:52:27.591865 | orchestrator | Saturday 12 July 2025 19:52:21 +0000 (0:00:00.225) 0:00:00.225 ********* 2025-07-12 19:52:27.591876 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 19:52:27.591888 | orchestrator | 2025-07-12 19:52:27.591899 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 19:52:27.591910 | orchestrator | Saturday 12 July 2025 19:52:21 +0000 (0:00:00.173) 0:00:00.399 ********* 2025-07-12 19:52:27.591921 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:52:27.591932 | orchestrator | 2025-07-12 19:52:27.591943 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.591954 | orchestrator | Saturday 12 July 2025 19:52:21 +0000 (0:00:00.157) 0:00:00.556 ********* 2025-07-12 19:52:27.591965 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-07-12 19:52:27.591990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-07-12 19:52:27.592002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-07-12 19:52:27.592013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-07-12 19:52:27.592024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-07-12 19:52:27.592034 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-07-12 19:52:27.592045 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-07-12 19:52:27.592056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-07-12 19:52:27.592067 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-07-12 19:52:27.592077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-07-12 19:52:27.592088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-07-12 19:52:27.592099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-07-12 19:52:27.592109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-07-12 19:52:27.592120 | orchestrator | 2025-07-12 19:52:27.592131 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592164 | orchestrator | Saturday 12 July 2025 19:52:21 +0000 (0:00:00.330) 0:00:00.887 ********* 2025-07-12 19:52:27.592175 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.592186 | orchestrator | 2025-07-12 19:52:27.592197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592221 | orchestrator | Saturday 12 July 2025 19:52:22 +0000 (0:00:00.295) 0:00:01.182 ********* 2025-07-12 19:52:27.592232 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.592243 | orchestrator | 2025-07-12 19:52:27.592254 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592265 | orchestrator | Saturday 12 July 2025 19:52:22 +0000 (0:00:00.188) 0:00:01.371 ********* 2025-07-12 19:52:27.592280 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.592291 | orchestrator | 2025-07-12 19:52:27.592302 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-07-12 19:52:27.592313 | orchestrator | Saturday 12 July 2025 19:52:22 +0000 (0:00:00.164) 0:00:01.535 ********* 2025-07-12 19:52:27.592324 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.592334 | orchestrator | 2025-07-12 19:52:27.592345 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592356 | orchestrator | Saturday 12 July 2025 19:52:22 +0000 (0:00:00.177) 0:00:01.713 ********* 2025-07-12 19:52:27.592367 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.592378 | orchestrator | 2025-07-12 19:52:27.592388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592399 | orchestrator | Saturday 12 July 2025 19:52:23 +0000 (0:00:00.192) 0:00:01.905 ********* 2025-07-12 19:52:27.592410 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.592421 | orchestrator | 2025-07-12 19:52:27.592432 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592442 | orchestrator | Saturday 12 July 2025 19:52:23 +0000 (0:00:00.180) 0:00:02.086 ********* 2025-07-12 19:52:27.592453 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.592464 | orchestrator | 2025-07-12 19:52:27.592475 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592486 | orchestrator | Saturday 12 July 2025 19:52:23 +0000 (0:00:00.183) 0:00:02.269 ********* 2025-07-12 19:52:27.592496 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.592507 | orchestrator | 2025-07-12 19:52:27.592518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592529 | orchestrator | Saturday 12 July 2025 19:52:23 +0000 (0:00:00.148) 0:00:02.418 ********* 2025-07-12 19:52:27.592539 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518) 2025-07-12 19:52:27.592551 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518) 2025-07-12 19:52:27.592562 | orchestrator | 2025-07-12 19:52:27.592573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592584 | orchestrator | Saturday 12 July 2025 19:52:23 +0000 (0:00:00.330) 0:00:02.749 ********* 2025-07-12 19:52:27.592610 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9) 2025-07-12 19:52:27.592622 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9) 2025-07-12 19:52:27.592633 | orchestrator | 2025-07-12 19:52:27.592644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592655 | orchestrator | Saturday 12 July 2025 19:52:24 +0000 (0:00:00.361) 0:00:03.111 ********* 2025-07-12 19:52:27.592666 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94) 2025-07-12 19:52:27.592677 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94) 2025-07-12 19:52:27.592688 | orchestrator | 2025-07-12 19:52:27.592699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592717 | orchestrator | Saturday 12 July 2025 19:52:24 +0000 (0:00:00.451) 0:00:03.562 ********* 2025-07-12 19:52:27.592753 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418) 2025-07-12 19:52:27.592764 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418) 2025-07-12 19:52:27.592775 | orchestrator | 2025-07-12 19:52:27.592786 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 19:52:27.592797 | orchestrator | Saturday 12 July 2025 19:52:25 +0000 (0:00:00.516) 0:00:04.079 ********* 2025-07-12 19:52:27.592807 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 19:52:27.592818 | orchestrator | 2025-07-12 19:52:27.592829 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:27.592839 | orchestrator | Saturday 12 July 2025 19:52:25 +0000 (0:00:00.487) 0:00:04.567 ********* 2025-07-12 19:52:27.592850 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-12 19:52:27.592861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-12 19:52:27.592871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-12 19:52:27.592882 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-07-12 19:52:27.592893 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-12 19:52:27.592903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-12 19:52:27.592914 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-12 19:52:27.592924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-12 19:52:27.592935 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-12 19:52:27.592946 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-12 19:52:27.592956 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-12 19:52:27.592967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-12 19:52:27.592977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-12 19:52:27.592988 | orchestrator | 2025-07-12 19:52:27.592999 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:27.593010 | orchestrator | Saturday 12 July 2025 19:52:26 +0000 (0:00:00.377) 0:00:04.945 ********* 2025-07-12 19:52:27.593020 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.593031 | orchestrator | 2025-07-12 19:52:27.593042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:27.593053 | orchestrator | Saturday 12 July 2025 19:52:26 +0000 (0:00:00.184) 0:00:05.129 ********* 2025-07-12 19:52:27.593063 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.593074 | orchestrator | 2025-07-12 19:52:27.593085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:27.593095 | orchestrator | Saturday 12 July 2025 19:52:26 +0000 (0:00:00.183) 0:00:05.313 ********* 2025-07-12 19:52:27.593106 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.593117 | orchestrator | 2025-07-12 19:52:27.593127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:27.593138 | orchestrator | Saturday 12 July 2025 19:52:26 +0000 (0:00:00.244) 0:00:05.558 ********* 2025-07-12 19:52:27.593149 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.593159 | orchestrator | 2025-07-12 19:52:27.593170 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:27.593188 | orchestrator | Saturday 12 July 2025 
19:52:26 +0000 (0:00:00.161) 0:00:05.719 ********* 2025-07-12 19:52:27.593199 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.593210 | orchestrator | 2025-07-12 19:52:27.593221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:27.593232 | orchestrator | Saturday 12 July 2025 19:52:27 +0000 (0:00:00.202) 0:00:05.922 ********* 2025-07-12 19:52:27.593242 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.593253 | orchestrator | 2025-07-12 19:52:27.593263 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:27.593274 | orchestrator | Saturday 12 July 2025 19:52:27 +0000 (0:00:00.195) 0:00:06.117 ********* 2025-07-12 19:52:27.593285 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:27.593295 | orchestrator | 2025-07-12 19:52:27.593306 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:27.593317 | orchestrator | Saturday 12 July 2025 19:52:27 +0000 (0:00:00.192) 0:00:06.310 ********* 2025-07-12 19:52:27.593334 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:35.119716 | orchestrator | 2025-07-12 19:52:35.119884 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:35.119903 | orchestrator | Saturday 12 July 2025 19:52:27 +0000 (0:00:00.178) 0:00:06.488 ********* 2025-07-12 19:52:35.119915 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-12 19:52:35.119928 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-12 19:52:35.119939 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-12 19:52:35.119950 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-12 19:52:35.119961 | orchestrator | 2025-07-12 19:52:35.119973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:35.119984 | 
orchestrator | Saturday 12 July 2025 19:52:28 +0000 (0:00:00.832) 0:00:07.320 ********* 2025-07-12 19:52:35.119996 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:35.120014 | orchestrator | 2025-07-12 19:52:35.120033 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:35.120051 | orchestrator | Saturday 12 July 2025 19:52:28 +0000 (0:00:00.197) 0:00:07.518 ********* 2025-07-12 19:52:35.120068 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:35.120087 | orchestrator | 2025-07-12 19:52:35.120106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:35.120125 | orchestrator | Saturday 12 July 2025 19:52:28 +0000 (0:00:00.193) 0:00:07.711 ********* 2025-07-12 19:52:35.120143 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:35.120163 | orchestrator | 2025-07-12 19:52:35.120176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:35.120187 | orchestrator | Saturday 12 July 2025 19:52:28 +0000 (0:00:00.188) 0:00:07.900 ********* 2025-07-12 19:52:35.120198 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:35.120209 | orchestrator | 2025-07-12 19:52:35.120220 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-12 19:52:35.120231 | orchestrator | Saturday 12 July 2025 19:52:29 +0000 (0:00:00.198) 0:00:08.099 ********* 2025-07-12 19:52:35.120242 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:35.120254 | orchestrator | 2025-07-12 19:52:35.120267 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-12 19:52:35.120279 | orchestrator | Saturday 12 July 2025 19:52:29 +0000 (0:00:00.116) 0:00:08.215 ********* 2025-07-12 19:52:35.120291 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'd5945923-5bd4-5f45-a4a9-07ddacb4606e'}}) 2025-07-12 19:52:35.120305 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '661525d0-45b6-5e60-bde8-1fec1e4af76b'}}) 2025-07-12 19:52:35.120317 | orchestrator | 2025-07-12 19:52:35.120330 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 19:52:35.120342 | orchestrator | Saturday 12 July 2025 19:52:29 +0000 (0:00:00.166) 0:00:08.382 ********* 2025-07-12 19:52:35.120356 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'}) 2025-07-12 19:52:35.120388 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'}) 2025-07-12 19:52:35.120401 | orchestrator | 2025-07-12 19:52:35.120430 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 19:52:35.120448 | orchestrator | Saturday 12 July 2025 19:52:31 +0000 (0:00:01.978) 0:00:10.360 ********* 2025-07-12 19:52:35.120461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})  2025-07-12 19:52:35.120475 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})  2025-07-12 19:52:35.120487 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:52:35.120500 | orchestrator | 2025-07-12 19:52:35.120512 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-12 19:52:35.120525 | orchestrator | Saturday 12 July 2025 19:52:31 +0000 (0:00:00.137) 0:00:10.498 ********* 2025-07-12 19:52:35.120537 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:35.120550 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:35.120562 | orchestrator |
2025-07-12 19:52:35.120574 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-12 19:52:35.120586 | orchestrator | Saturday 12 July 2025 19:52:33 +0000 (0:00:01.442) 0:00:11.941 *********
2025-07-12 19:52:35.120599 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:35.120611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:35.120621 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.120633 | orchestrator |
2025-07-12 19:52:35.120649 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-12 19:52:35.120666 | orchestrator | Saturday 12 July 2025 19:52:33 +0000 (0:00:00.138) 0:00:12.073 *********
2025-07-12 19:52:35.120683 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.120701 | orchestrator |
2025-07-12 19:52:35.120746 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-12 19:52:35.120790 | orchestrator | Saturday 12 July 2025 19:52:33 +0000 (0:00:00.132) 0:00:12.212 *********
2025-07-12 19:52:35.120810 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:35.120830 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:35.120848 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.120867 | orchestrator |
2025-07-12 19:52:35.120879 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-12 19:52:35.120889 | orchestrator | Saturday 12 July 2025 19:52:33 +0000 (0:00:00.132) 0:00:12.559 *********
2025-07-12 19:52:35.120900 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.120911 | orchestrator |
2025-07-12 19:52:35.120922 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-12 19:52:35.120932 | orchestrator | Saturday 12 July 2025 19:52:33 +0000 (0:00:00.132) 0:00:12.692 *********
2025-07-12 19:52:35.120943 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:35.120965 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:35.120976 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.120987 | orchestrator |
2025-07-12 19:52:35.120998 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-12 19:52:35.121009 | orchestrator | Saturday 12 July 2025 19:52:33 +0000 (0:00:00.149) 0:00:12.841 *********
2025-07-12 19:52:35.121020 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.121031 | orchestrator |
2025-07-12 19:52:35.121042 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-12 19:52:35.121053 | orchestrator | Saturday 12 July 2025 19:52:34 +0000 (0:00:00.137) 0:00:12.979 *********
2025-07-12 19:52:35.121064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:35.121080 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:35.121098 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.121117 | orchestrator |
2025-07-12 19:52:35.121137 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-07-12 19:52:35.121156 | orchestrator | Saturday 12 July 2025 19:52:34 +0000 (0:00:00.156) 0:00:13.135 *********
2025-07-12 19:52:35.121175 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:52:35.121187 | orchestrator |
2025-07-12 19:52:35.121198 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-07-12 19:52:35.121209 | orchestrator | Saturday 12 July 2025 19:52:34 +0000 (0:00:00.142) 0:00:13.278 *********
2025-07-12 19:52:35.121228 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:35.121239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:35.121250 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.121261 | orchestrator |
2025-07-12 19:52:35.121272 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-07-12 19:52:35.121283 | orchestrator | Saturday 12 July 2025 19:52:34 +0000 (0:00:00.138) 0:00:13.416 *********
2025-07-12 19:52:35.121294 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:35.121305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:35.121316 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.121327 | orchestrator |
2025-07-12 19:52:35.121338 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-07-12 19:52:35.121349 | orchestrator | Saturday 12 July 2025 19:52:34 +0000 (0:00:00.170) 0:00:13.586 *********
2025-07-12 19:52:35.121425 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:35.121437 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:35.121448 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.121459 | orchestrator |
2025-07-12 19:52:35.121469 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-07-12 19:52:35.121480 | orchestrator | Saturday 12 July 2025 19:52:34 +0000 (0:00:00.151) 0:00:13.737 *********
2025-07-12 19:52:35.121491 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.121510 | orchestrator |
2025-07-12 19:52:35.121521 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-07-12 19:52:35.121533 | orchestrator | Saturday 12 July 2025 19:52:34 +0000 (0:00:00.142) 0:00:13.880 *********
2025-07-12 19:52:35.121543 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:35.121554 | orchestrator |
2025-07-12 19:52:35.121574 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-07-12 19:52:41.513926 | orchestrator | Saturday 12 July 2025 19:52:35 +0000 (0:00:00.131) 0:00:14.011 *********
2025-07-12 19:52:41.514107 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.514127 | orchestrator |
2025-07-12 19:52:41.514141 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-07-12 19:52:41.514153 | orchestrator | Saturday 12 July 2025 19:52:35 +0000 (0:00:00.150) 0:00:14.162 *********
2025-07-12 19:52:41.514165 | orchestrator | ok: [testbed-node-3] => {
2025-07-12 19:52:41.514177 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-07-12 19:52:41.514188 | orchestrator | }
2025-07-12 19:52:41.514200 | orchestrator |
2025-07-12 19:52:41.514211 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-07-12 19:52:41.514222 | orchestrator | Saturday 12 July 2025 19:52:35 +0000 (0:00:00.337) 0:00:14.499 *********
2025-07-12 19:52:41.514233 | orchestrator | ok: [testbed-node-3] => {
2025-07-12 19:52:41.514244 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-07-12 19:52:41.514255 | orchestrator | }
2025-07-12 19:52:41.514266 | orchestrator |
2025-07-12 19:52:41.514277 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-12 19:52:41.514288 | orchestrator | Saturday 12 July 2025 19:52:35 +0000 (0:00:00.134) 0:00:14.634 *********
2025-07-12 19:52:41.514298 | orchestrator | ok: [testbed-node-3] => {
2025-07-12 19:52:41.514309 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-07-12 19:52:41.514321 | orchestrator | }
2025-07-12 19:52:41.514333 | orchestrator |
2025-07-12 19:52:41.514344 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-12 19:52:41.514355 | orchestrator | Saturday 12 July 2025 19:52:35 +0000 (0:00:00.143) 0:00:14.777 *********
2025-07-12 19:52:41.514366 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:52:41.514377 | orchestrator |
2025-07-12 19:52:41.514388 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-12 19:52:41.514399 | orchestrator | Saturday 12 July 2025 19:52:36 +0000 (0:00:00.659) 0:00:15.437 *********
2025-07-12 19:52:41.514410 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:52:41.514421 | orchestrator |
2025-07-12 19:52:41.514432 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-12 19:52:41.514442 | orchestrator | Saturday 12 July 2025 19:52:37 +0000 (0:00:00.532) 0:00:15.970 *********
2025-07-12 19:52:41.514454 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:52:41.514466 | orchestrator |
2025-07-12 19:52:41.514478 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-12 19:52:41.514490 | orchestrator | Saturday 12 July 2025 19:52:37 +0000 (0:00:00.507) 0:00:16.478 *********
2025-07-12 19:52:41.514502 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:52:41.514514 | orchestrator |
2025-07-12 19:52:41.514527 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-12 19:52:41.514539 | orchestrator | Saturday 12 July 2025 19:52:37 +0000 (0:00:00.137) 0:00:16.615 *********
2025-07-12 19:52:41.514551 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.514563 | orchestrator |
2025-07-12 19:52:41.514575 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-12 19:52:41.514588 | orchestrator | Saturday 12 July 2025 19:52:37 +0000 (0:00:00.112) 0:00:16.727 *********
2025-07-12 19:52:41.514600 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.514612 | orchestrator |
2025-07-12 19:52:41.514624 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-12 19:52:41.514635 | orchestrator | Saturday 12 July 2025 19:52:37 +0000 (0:00:00.108) 0:00:16.836 *********
2025-07-12 19:52:41.514668 | orchestrator | ok: [testbed-node-3] => {
2025-07-12 19:52:41.514680 | orchestrator |  "vgs_report": {
2025-07-12 19:52:41.514691 | orchestrator |  "vg": []
2025-07-12 19:52:41.514702 | orchestrator |  }
2025-07-12 19:52:41.514713 | orchestrator | }
2025-07-12 19:52:41.514746 | orchestrator |
2025-07-12 19:52:41.514757 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-12 19:52:41.514768 | orchestrator | Saturday 12 July 2025 19:52:38 +0000 (0:00:00.137) 0:00:16.974 *********
2025-07-12 19:52:41.514778 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.514789 | orchestrator |
2025-07-12 19:52:41.514800 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-12 19:52:41.514810 | orchestrator | Saturday 12 July 2025 19:52:38 +0000 (0:00:00.136) 0:00:17.110 *********
2025-07-12 19:52:41.514821 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.514858 | orchestrator |
2025-07-12 19:52:41.514869 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-12 19:52:41.514880 | orchestrator | Saturday 12 July 2025 19:52:38 +0000 (0:00:00.138) 0:00:17.249 *********
2025-07-12 19:52:41.514891 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.514901 | orchestrator |
2025-07-12 19:52:41.514912 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-12 19:52:41.514923 | orchestrator | Saturday 12 July 2025 19:52:38 +0000 (0:00:00.317) 0:00:17.566 *********
2025-07-12 19:52:41.514933 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.514944 | orchestrator |
2025-07-12 19:52:41.514955 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-12 19:52:41.514965 | orchestrator | Saturday 12 July 2025 19:52:38 +0000 (0:00:00.134) 0:00:17.701 *********
2025-07-12 19:52:41.514976 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.514987 | orchestrator |
2025-07-12 19:52:41.515015 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-12 19:52:41.515027 | orchestrator | Saturday 12 July 2025 19:52:38 +0000 (0:00:00.151) 0:00:17.852 *********
2025-07-12 19:52:41.515038 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515048 | orchestrator |
2025-07-12 19:52:41.515059 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-12 19:52:41.515070 | orchestrator | Saturday 12 July 2025 19:52:39 +0000 (0:00:00.134) 0:00:17.987 *********
2025-07-12 19:52:41.515081 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515092 | orchestrator |
2025-07-12 19:52:41.515102 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-12 19:52:41.515113 | orchestrator | Saturday 12 July 2025 19:52:39 +0000 (0:00:00.135) 0:00:18.122 *********
2025-07-12 19:52:41.515124 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515135 | orchestrator |
2025-07-12 19:52:41.515146 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-12 19:52:41.515174 | orchestrator | Saturday 12 July 2025 19:52:39 +0000 (0:00:00.137) 0:00:18.260 *********
2025-07-12 19:52:41.515186 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515197 | orchestrator |
2025-07-12 19:52:41.515208 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-12 19:52:41.515218 | orchestrator | Saturday 12 July 2025 19:52:39 +0000 (0:00:00.143) 0:00:18.403 *********
2025-07-12 19:52:41.515229 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515240 | orchestrator |
2025-07-12 19:52:41.515251 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-12 19:52:41.515261 | orchestrator | Saturday 12 July 2025 19:52:39 +0000 (0:00:00.148) 0:00:18.552 *********
2025-07-12 19:52:41.515272 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515283 | orchestrator |
2025-07-12 19:52:41.515293 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-12 19:52:41.515304 | orchestrator | Saturday 12 July 2025 19:52:39 +0000 (0:00:00.153) 0:00:18.705 *********
2025-07-12 19:52:41.515315 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515326 | orchestrator |
2025-07-12 19:52:41.515346 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-12 19:52:41.515357 | orchestrator | Saturday 12 July 2025 19:52:39 +0000 (0:00:00.142) 0:00:18.848 *********
2025-07-12 19:52:41.515367 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515378 | orchestrator |
2025-07-12 19:52:41.515389 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-12 19:52:41.515400 | orchestrator | Saturday 12 July 2025 19:52:40 +0000 (0:00:00.150) 0:00:18.998 *********
2025-07-12 19:52:41.515411 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515422 | orchestrator |
2025-07-12 19:52:41.515433 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-12 19:52:41.515444 | orchestrator | Saturday 12 July 2025 19:52:40 +0000 (0:00:00.133) 0:00:19.132 *********
2025-07-12 19:52:41.515456 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:41.515468 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:41.515479 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515490 | orchestrator |
2025-07-12 19:52:41.515501 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-12 19:52:41.515512 | orchestrator | Saturday 12 July 2025 19:52:40 +0000 (0:00:00.159) 0:00:19.292 *********
2025-07-12 19:52:41.515523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:41.515534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:41.515545 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515556 | orchestrator |
2025-07-12 19:52:41.515567 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-12 19:52:41.515578 | orchestrator | Saturday 12 July 2025 19:52:40 +0000 (0:00:00.457) 0:00:19.750 *********
2025-07-12 19:52:41.515594 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:41.515606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:41.515616 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515627 | orchestrator |
2025-07-12 19:52:41.515638 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-12 19:52:41.515649 | orchestrator | Saturday 12 July 2025 19:52:40 +0000 (0:00:00.151) 0:00:19.901 *********
2025-07-12 19:52:41.515659 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:41.515670 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:41.515681 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515692 | orchestrator |
2025-07-12 19:52:41.515702 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-12 19:52:41.515713 | orchestrator | Saturday 12 July 2025 19:52:41 +0000 (0:00:00.195) 0:00:20.096 *********
2025-07-12 19:52:41.515745 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:41.515764 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:41.515782 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:41.515813 | orchestrator |
2025-07-12 19:52:41.515833 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-12 19:52:41.515851 | orchestrator | Saturday 12 July 2025 19:52:41 +0000 (0:00:00.166) 0:00:20.262 *********
2025-07-12 19:52:41.515867 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:41.515886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:47.233422 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:47.233481 | orchestrator |
2025-07-12 19:52:47.233487 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-12 19:52:47.233494 | orchestrator | Saturday 12 July 2025 19:52:41 +0000 (0:00:00.138) 0:00:20.401 *********
2025-07-12 19:52:47.233499 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:47.233505 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:47.233510 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:47.233515 | orchestrator |
2025-07-12 19:52:47.233520 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-12 19:52:47.233525 | orchestrator | Saturday 12 July 2025 19:52:41 +0000 (0:00:00.147) 0:00:20.548 *********
2025-07-12 19:52:47.233529 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:47.233534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:47.233539 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:47.233544 | orchestrator |
2025-07-12 19:52:47.233549 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-12 19:52:47.233553 | orchestrator | Saturday 12 July 2025 19:52:41 +0000 (0:00:00.163) 0:00:20.711 *********
2025-07-12 19:52:47.233558 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:52:47.233564 | orchestrator |
2025-07-12 19:52:47.233568 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-12 19:52:47.233573 | orchestrator | Saturday 12 July 2025 19:52:42 +0000 (0:00:00.518) 0:00:21.230 *********
2025-07-12 19:52:47.233577 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:52:47.233582 | orchestrator |
2025-07-12 19:52:47.233586 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-12 19:52:47.233591 | orchestrator | Saturday 12 July 2025 19:52:42 +0000 (0:00:00.506) 0:00:21.737 *********
2025-07-12 19:52:47.233596 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:52:47.233600 | orchestrator |
2025-07-12 19:52:47.233605 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-12 19:52:47.233609 | orchestrator | Saturday 12 July 2025 19:52:42 +0000 (0:00:00.147) 0:00:21.884 *********
2025-07-12 19:52:47.233614 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'vg_name': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:47.233620 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'vg_name': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:47.233625 | orchestrator |
2025-07-12 19:52:47.233630 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-12 19:52:47.233634 | orchestrator | Saturday 12 July 2025 19:52:43 +0000 (0:00:00.155) 0:00:22.039 *********
2025-07-12 19:52:47.233639 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:47.233657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:47.233662 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:47.233667 | orchestrator |
2025-07-12 19:52:47.233671 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-12 19:52:47.233676 | orchestrator | Saturday 12 July 2025 19:52:43 +0000 (0:00:00.208) 0:00:22.247 *********
2025-07-12 19:52:47.233680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:47.233685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:47.233690 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:47.233694 | orchestrator |
2025-07-12 19:52:47.233699 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-12 19:52:47.233703 | orchestrator | Saturday 12 July 2025 19:52:43 +0000 (0:00:00.359) 0:00:22.607 *********
2025-07-12 19:52:47.233708 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 19:52:47.233713 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 19:52:47.233752 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:52:47.233757 | orchestrator |
2025-07-12 19:52:47.233761 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-12 19:52:47.233766 | orchestrator | Saturday 12 July 2025 19:52:43 +0000 (0:00:00.166) 0:00:22.774 *********
2025-07-12 19:52:47.233770 | orchestrator | ok: [testbed-node-3] => {
2025-07-12 19:52:47.233775 | orchestrator |  "lvm_report": {
2025-07-12 19:52:47.233780 | orchestrator |  "lv": [
2025-07-12 19:52:47.233785 | orchestrator |  {
2025-07-12 19:52:47.233798 | orchestrator |  "lv_name": "osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b",
2025-07-12 19:52:47.233804 | orchestrator |  "vg_name": "ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b"
2025-07-12 19:52:47.233808 | orchestrator |  },
2025-07-12 19:52:47.233813 | orchestrator |  {
2025-07-12 19:52:47.233818 | orchestrator |  "lv_name": "osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e",
2025-07-12 19:52:47.233822 | orchestrator |  "vg_name": "ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e"
2025-07-12 19:52:47.233827 | orchestrator |  }
2025-07-12 19:52:47.233831 | orchestrator |  ],
2025-07-12 19:52:47.233836 | orchestrator |  "pv": [
2025-07-12 19:52:47.233841 | orchestrator |  {
2025-07-12 19:52:47.233845 | orchestrator |  "pv_name": "/dev/sdb",
2025-07-12 19:52:47.233850 | orchestrator |  "vg_name": "ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e"
2025-07-12 19:52:47.233854 | orchestrator |  },
2025-07-12 19:52:47.233859 | orchestrator |  {
2025-07-12 19:52:47.233864 | orchestrator |  "pv_name": "/dev/sdc",
2025-07-12 19:52:47.233868 | orchestrator |  "vg_name": "ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b"
2025-07-12 19:52:47.233873 | orchestrator |  }
2025-07-12 19:52:47.233877 | orchestrator |  ]
2025-07-12 19:52:47.233882 | orchestrator |  }
2025-07-12 19:52:47.233887 | orchestrator | }
2025-07-12 19:52:47.233892 | orchestrator |
2025-07-12 19:52:47.233896 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-07-12 19:52:47.233901 | orchestrator |
2025-07-12 19:52:47.233905 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 19:52:47.233910 | orchestrator | Saturday 12 July 2025 19:52:44 +0000 (0:00:00.296) 0:00:23.071 *********
2025-07-12 19:52:47.233915 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-07-12 19:52:47.233924 | orchestrator |
2025-07-12 19:52:47.233928 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 19:52:47.233933 | orchestrator | Saturday 12 July 2025 19:52:44 +0000 (0:00:00.285) 0:00:23.356 *********
2025-07-12 19:52:47.233938 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:52:47.233942 | orchestrator |
2025-07-12 19:52:47.233947 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:47.233951 | orchestrator | Saturday 12 July 2025 19:52:44 +0000 (0:00:00.260) 0:00:23.617 *********
2025-07-12 19:52:47.233967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-07-12 19:52:47.233972 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-07-12 19:52:47.233977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-07-12 19:52:47.233981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-07-12 19:52:47.233986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-07-12 19:52:47.233990 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-07-12 19:52:47.233995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-07-12 19:52:47.234002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-07-12 19:52:47.234008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-07-12 19:52:47.234013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-07-12 19:52:47.234044 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-07-12 19:52:47.234049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-07-12 19:52:47.234054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-07-12 19:52:47.234059 | orchestrator |
2025-07-12 19:52:47.234064 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:47.234070 | orchestrator | Saturday 12 July 2025 19:52:45 +0000 (0:00:00.425) 0:00:24.042 *********
2025-07-12 19:52:47.234075 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:52:47.234080 | orchestrator |
2025-07-12 19:52:47.234085 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:47.234090 | orchestrator | Saturday 12 July 2025 19:52:45 +0000 (0:00:00.253) 0:00:24.296 *********
2025-07-12 19:52:47.234095 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:52:47.234100 | orchestrator |
2025-07-12 19:52:47.234105 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:47.234110 | orchestrator | Saturday 12 July 2025 19:52:45 +0000 (0:00:00.231) 0:00:24.528 *********
2025-07-12 19:52:47.234116 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:52:47.234121 | orchestrator |
2025-07-12 19:52:47.234126 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:47.234131 | orchestrator | Saturday 12 July 2025 19:52:45 +0000 (0:00:00.234) 0:00:24.762 *********
2025-07-12 19:52:47.234136 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:52:47.234141 | orchestrator |
2025-07-12 19:52:47.234146 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:47.234151 | orchestrator | Saturday 12 July 2025 19:52:46 +0000 (0:00:00.685) 0:00:25.448 *********
2025-07-12 19:52:47.234156 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:52:47.234161 | orchestrator |
2025-07-12 19:52:47.234166 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:47.234171 | orchestrator | Saturday 12 July 2025 19:52:46 +0000 (0:00:00.206) 0:00:25.654 *********
2025-07-12 19:52:47.234176 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:52:47.234182 | orchestrator |
2025-07-12 19:52:47.234191 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:47.234196 | orchestrator | Saturday 12 July 2025 19:52:46 +0000 (0:00:00.237) 0:00:25.892 *********
2025-07-12 19:52:47.234202 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:52:47.234207 | orchestrator |
2025-07-12 19:52:47.234215 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:58.388984 | orchestrator | Saturday 12 July 2025 19:52:47 +0000 (0:00:00.234) 0:00:26.127 *********
2025-07-12 19:52:58.389109 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:52:58.389126 | orchestrator |
2025-07-12 19:52:58.389140 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:58.389152 | orchestrator | Saturday 12 July 2025 19:52:47 +0000 (0:00:00.212) 0:00:26.339 *********
2025-07-12 19:52:58.389163 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9)
2025-07-12 19:52:58.389176 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9)
2025-07-12 19:52:58.389187 | orchestrator |
2025-07-12 19:52:58.389198 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:58.389209 | orchestrator | Saturday 12 July 2025 19:52:47 +0000 (0:00:00.404) 0:00:26.743 *********
2025-07-12 19:52:58.389220 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7)
2025-07-12 19:52:58.389231 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7)
2025-07-12 19:52:58.389242 | orchestrator |
2025-07-12 19:52:58.389253 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:58.389263 | orchestrator | Saturday 12 July 2025 19:52:48 +0000 (0:00:00.518) 0:00:27.262 *********
2025-07-12 19:52:58.389274 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350)
2025-07-12 19:52:58.389285 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350)
2025-07-12 19:52:58.389297 | orchestrator |
2025-07-12 19:52:58.389317 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:58.389336 | orchestrator | Saturday 12 July 2025 19:52:48 +0000 (0:00:00.446) 0:00:27.709 *********
2025-07-12 19:52:58.389348 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb)
2025-07-12 19:52:58.389359 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb)
2025-07-12 19:52:58.389370 | orchestrator |
2025-07-12 19:52:58.389381 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:52:58.389392 | orchestrator | Saturday 12 July 2025 19:52:49 +0000 (0:00:00.443) 0:00:28.153 *********
2025-07-12 19:52:58.389402 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 19:52:58.389413 | orchestrator |
2025-07-12 19:52:58.389424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:52:58.389435 | orchestrator | Saturday 12 July 2025 19:52:49 +0000 (0:00:00.414) 0:00:28.567 *********
2025-07-12 19:52:58.389445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-07-12 19:52:58.389472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-07-12 19:52:58.389484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-07-12 19:52:58.389497 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-07-12 19:52:58.389511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-07-12 19:52:58.389530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-07-12 19:52:58.389550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-07-12 19:52:58.389588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-07-12 19:52:58.389601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-07-12 19:52:58.389613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-07-12 19:52:58.389626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-07-12 19:52:58.389639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-07-12 19:52:58.389651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-07-12 19:52:58.389664 | orchestrator |
2025-07-12 19:52:58.389675 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:52:58.389687 | orchestrator | Saturday 12 July 2025 19:52:50 +0000 (0:00:00.589) 0:00:29.157 *********
2025-07-12 19:52:58.389700 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:52:58.389755 | orchestrator |
2025-07-12 19:52:58.389779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:52:58.389797 | orchestrator | Saturday
12 July 2025 19:52:50 +0000 (0:00:00.208) 0:00:29.366 ********* 2025-07-12 19:52:58.389816 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.389828 | orchestrator | 2025-07-12 19:52:58.389839 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.389850 | orchestrator | Saturday 12 July 2025 19:52:50 +0000 (0:00:00.192) 0:00:29.558 ********* 2025-07-12 19:52:58.389861 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.389871 | orchestrator | 2025-07-12 19:52:58.389882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.389893 | orchestrator | Saturday 12 July 2025 19:52:50 +0000 (0:00:00.223) 0:00:29.782 ********* 2025-07-12 19:52:58.389904 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.389915 | orchestrator | 2025-07-12 19:52:58.389958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.389973 | orchestrator | Saturday 12 July 2025 19:52:51 +0000 (0:00:00.217) 0:00:30.000 ********* 2025-07-12 19:52:58.389984 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.389995 | orchestrator | 2025-07-12 19:52:58.390006 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.390087 | orchestrator | Saturday 12 July 2025 19:52:51 +0000 (0:00:00.184) 0:00:30.184 ********* 2025-07-12 19:52:58.390100 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.390111 | orchestrator | 2025-07-12 19:52:58.390122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.390133 | orchestrator | Saturday 12 July 2025 19:52:51 +0000 (0:00:00.255) 0:00:30.440 ********* 2025-07-12 19:52:58.390144 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.390155 | orchestrator | 2025-07-12 19:52:58.390165 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.390176 | orchestrator | Saturday 12 July 2025 19:52:51 +0000 (0:00:00.241) 0:00:30.681 ********* 2025-07-12 19:52:58.390187 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.390197 | orchestrator | 2025-07-12 19:52:58.390208 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.390219 | orchestrator | Saturday 12 July 2025 19:52:52 +0000 (0:00:00.222) 0:00:30.904 ********* 2025-07-12 19:52:58.390230 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-12 19:52:58.390240 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-12 19:52:58.390252 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-12 19:52:58.390262 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-12 19:52:58.390273 | orchestrator | 2025-07-12 19:52:58.390285 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.390295 | orchestrator | Saturday 12 July 2025 19:52:52 +0000 (0:00:00.986) 0:00:31.891 ********* 2025-07-12 19:52:58.390318 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.390328 | orchestrator | 2025-07-12 19:52:58.390339 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.390350 | orchestrator | Saturday 12 July 2025 19:52:53 +0000 (0:00:00.230) 0:00:32.121 ********* 2025-07-12 19:52:58.390361 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.390371 | orchestrator | 2025-07-12 19:52:58.390388 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.390408 | orchestrator | Saturday 12 July 2025 19:52:53 +0000 (0:00:00.267) 0:00:32.388 ********* 2025-07-12 19:52:58.390426 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.390437 | 
orchestrator | 2025-07-12 19:52:58.390448 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 19:52:58.390459 | orchestrator | Saturday 12 July 2025 19:52:54 +0000 (0:00:00.800) 0:00:33.189 ********* 2025-07-12 19:52:58.390469 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.390480 | orchestrator | 2025-07-12 19:52:58.390491 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-12 19:52:58.390502 | orchestrator | Saturday 12 July 2025 19:52:54 +0000 (0:00:00.253) 0:00:33.442 ********* 2025-07-12 19:52:58.390513 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.390524 | orchestrator | 2025-07-12 19:52:58.390535 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-12 19:52:58.390546 | orchestrator | Saturday 12 July 2025 19:52:54 +0000 (0:00:00.177) 0:00:33.620 ********* 2025-07-12 19:52:58.390556 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'aa90e2bf-e75d-5c47-ae76-8a1384e00d58'}}) 2025-07-12 19:52:58.390568 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2f895b30-8de9-512a-b128-a5c9585d4791'}}) 2025-07-12 19:52:58.390579 | orchestrator | 2025-07-12 19:52:58.390590 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 19:52:58.390602 | orchestrator | Saturday 12 July 2025 19:52:54 +0000 (0:00:00.256) 0:00:33.876 ********* 2025-07-12 19:52:58.390623 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'}) 2025-07-12 19:52:58.390643 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'}) 2025-07-12 19:52:58.390654 | 
orchestrator | 2025-07-12 19:52:58.390665 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 19:52:58.390675 | orchestrator | Saturday 12 July 2025 19:52:56 +0000 (0:00:01.900) 0:00:35.776 ********* 2025-07-12 19:52:58.390686 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:52:58.390698 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:52:58.390709 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:52:58.390798 | orchestrator | 2025-07-12 19:52:58.390811 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-12 19:52:58.390822 | orchestrator | Saturday 12 July 2025 19:52:57 +0000 (0:00:00.150) 0:00:35.927 ********* 2025-07-12 19:52:58.390833 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'}) 2025-07-12 19:52:58.390851 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'}) 2025-07-12 19:52:58.390869 | orchestrator | 2025-07-12 19:52:58.390898 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-12 19:53:04.127422 | orchestrator | Saturday 12 July 2025 19:52:58 +0000 (0:00:01.347) 0:00:37.274 ********* 2025-07-12 19:53:04.127569 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:04.127587 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:04.127598 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.127613 | orchestrator | 2025-07-12 19:53:04.127632 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-12 19:53:04.127644 | orchestrator | Saturday 12 July 2025 19:52:58 +0000 (0:00:00.198) 0:00:37.473 ********* 2025-07-12 19:53:04.127655 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.127666 | orchestrator | 2025-07-12 19:53:04.127677 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-12 19:53:04.127689 | orchestrator | Saturday 12 July 2025 19:52:58 +0000 (0:00:00.156) 0:00:37.629 ********* 2025-07-12 19:53:04.127700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:04.127758 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:04.127772 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.127783 | orchestrator | 2025-07-12 19:53:04.127795 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-12 19:53:04.127805 | orchestrator | Saturday 12 July 2025 19:52:58 +0000 (0:00:00.170) 0:00:37.800 ********* 2025-07-12 19:53:04.127816 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.127826 | orchestrator | 2025-07-12 19:53:04.127837 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-12 19:53:04.127848 | orchestrator | Saturday 12 July 2025 19:52:59 +0000 (0:00:00.153) 0:00:37.953 ********* 2025-07-12 19:53:04.127859 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:04.127870 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:04.127880 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.127891 | orchestrator | 2025-07-12 19:53:04.127902 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-12 19:53:04.127913 | orchestrator | Saturday 12 July 2025 19:52:59 +0000 (0:00:00.163) 0:00:38.117 ********* 2025-07-12 19:53:04.127932 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.127944 | orchestrator | 2025-07-12 19:53:04.127957 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 19:53:04.127973 | orchestrator | Saturday 12 July 2025 19:52:59 +0000 (0:00:00.362) 0:00:38.479 ********* 2025-07-12 19:53:04.127992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:04.128011 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:04.128032 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.128051 | orchestrator | 2025-07-12 19:53:04.128070 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-12 19:53:04.128099 | orchestrator | Saturday 12 July 2025 19:52:59 +0000 (0:00:00.176) 0:00:38.655 ********* 2025-07-12 19:53:04.128120 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:53:04.128139 | orchestrator | 2025-07-12 19:53:04.128156 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-07-12 19:53:04.128173 | orchestrator | Saturday 12 July 2025 19:52:59 +0000 (0:00:00.143) 0:00:38.798 ********* 2025-07-12 19:53:04.128204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:04.128221 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:04.128237 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.128254 | orchestrator | 2025-07-12 19:53:04.128271 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 19:53:04.128287 | orchestrator | Saturday 12 July 2025 19:53:00 +0000 (0:00:00.183) 0:00:38.982 ********* 2025-07-12 19:53:04.128304 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:04.128320 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:04.128338 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.128354 | orchestrator | 2025-07-12 19:53:04.128370 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 19:53:04.128388 | orchestrator | Saturday 12 July 2025 19:53:00 +0000 (0:00:00.161) 0:00:39.144 ********* 2025-07-12 19:53:04.128432 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:04.128451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 
'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:04.128472 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.128490 | orchestrator | 2025-07-12 19:53:04.128509 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 19:53:04.128521 | orchestrator | Saturday 12 July 2025 19:53:00 +0000 (0:00:00.165) 0:00:39.310 ********* 2025-07-12 19:53:04.128532 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.128545 | orchestrator | 2025-07-12 19:53:04.128563 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 19:53:04.128580 | orchestrator | Saturday 12 July 2025 19:53:00 +0000 (0:00:00.131) 0:00:39.441 ********* 2025-07-12 19:53:04.128597 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.128618 | orchestrator | 2025-07-12 19:53:04.128637 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 19:53:04.128655 | orchestrator | Saturday 12 July 2025 19:53:00 +0000 (0:00:00.121) 0:00:39.562 ********* 2025-07-12 19:53:04.128668 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.128679 | orchestrator | 2025-07-12 19:53:04.128689 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 19:53:04.128700 | orchestrator | Saturday 12 July 2025 19:53:00 +0000 (0:00:00.135) 0:00:39.698 ********* 2025-07-12 19:53:04.128738 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 19:53:04.128751 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-12 19:53:04.128762 | orchestrator | } 2025-07-12 19:53:04.128773 | orchestrator | 2025-07-12 19:53:04.128784 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 19:53:04.128795 | orchestrator | Saturday 12 July 2025 19:53:00 +0000 (0:00:00.153) 0:00:39.852 ********* 2025-07-12 19:53:04.128806 | 
orchestrator | ok: [testbed-node-4] => { 2025-07-12 19:53:04.128817 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-12 19:53:04.128828 | orchestrator | } 2025-07-12 19:53:04.128838 | orchestrator | 2025-07-12 19:53:04.128849 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-12 19:53:04.128860 | orchestrator | Saturday 12 July 2025 19:53:01 +0000 (0:00:00.153) 0:00:40.005 ********* 2025-07-12 19:53:04.128871 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 19:53:04.128882 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-12 19:53:04.128904 | orchestrator | } 2025-07-12 19:53:04.128915 | orchestrator | 2025-07-12 19:53:04.128926 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-12 19:53:04.128937 | orchestrator | Saturday 12 July 2025 19:53:01 +0000 (0:00:00.145) 0:00:40.151 ********* 2025-07-12 19:53:04.128948 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:53:04.128959 | orchestrator | 2025-07-12 19:53:04.128970 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-12 19:53:04.128981 | orchestrator | Saturday 12 July 2025 19:53:02 +0000 (0:00:00.810) 0:00:40.961 ********* 2025-07-12 19:53:04.129000 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:53:04.129011 | orchestrator | 2025-07-12 19:53:04.129022 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-12 19:53:04.129033 | orchestrator | Saturday 12 July 2025 19:53:02 +0000 (0:00:00.530) 0:00:41.491 ********* 2025-07-12 19:53:04.129044 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:53:04.129055 | orchestrator | 2025-07-12 19:53:04.129066 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-12 19:53:04.129077 | orchestrator | Saturday 12 July 2025 19:53:03 +0000 (0:00:00.498) 0:00:41.990 ********* 2025-07-12 
19:53:04.129087 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:53:04.129098 | orchestrator | 2025-07-12 19:53:04.129109 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-12 19:53:04.129120 | orchestrator | Saturday 12 July 2025 19:53:03 +0000 (0:00:00.148) 0:00:42.138 ********* 2025-07-12 19:53:04.129131 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.129141 | orchestrator | 2025-07-12 19:53:04.129152 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-12 19:53:04.129163 | orchestrator | Saturday 12 July 2025 19:53:03 +0000 (0:00:00.103) 0:00:42.242 ********* 2025-07-12 19:53:04.129174 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.129184 | orchestrator | 2025-07-12 19:53:04.129195 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-12 19:53:04.129206 | orchestrator | Saturday 12 July 2025 19:53:03 +0000 (0:00:00.108) 0:00:42.350 ********* 2025-07-12 19:53:04.129217 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 19:53:04.129228 | orchestrator |  "vgs_report": { 2025-07-12 19:53:04.129239 | orchestrator |  "vg": [] 2025-07-12 19:53:04.129250 | orchestrator |  } 2025-07-12 19:53:04.129260 | orchestrator | } 2025-07-12 19:53:04.129271 | orchestrator | 2025-07-12 19:53:04.129282 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-12 19:53:04.129293 | orchestrator | Saturday 12 July 2025 19:53:03 +0000 (0:00:00.133) 0:00:42.483 ********* 2025-07-12 19:53:04.129304 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.129314 | orchestrator | 2025-07-12 19:53:04.129325 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-12 19:53:04.129336 | orchestrator | Saturday 12 July 2025 19:53:03 +0000 (0:00:00.129) 0:00:42.613 ********* 2025-07-12 
19:53:04.129346 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.129357 | orchestrator | 2025-07-12 19:53:04.129368 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-12 19:53:04.129379 | orchestrator | Saturday 12 July 2025 19:53:03 +0000 (0:00:00.136) 0:00:42.750 ********* 2025-07-12 19:53:04.129389 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.129400 | orchestrator | 2025-07-12 19:53:04.129411 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-12 19:53:04.129421 | orchestrator | Saturday 12 July 2025 19:53:03 +0000 (0:00:00.134) 0:00:42.885 ********* 2025-07-12 19:53:04.129432 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:04.129442 | orchestrator | 2025-07-12 19:53:04.129453 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-12 19:53:04.129472 | orchestrator | Saturday 12 July 2025 19:53:04 +0000 (0:00:00.130) 0:00:43.016 ********* 2025-07-12 19:53:09.018206 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018315 | orchestrator | 2025-07-12 19:53:09.018353 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-12 19:53:09.018368 | orchestrator | Saturday 12 July 2025 19:53:04 +0000 (0:00:00.131) 0:00:43.147 ********* 2025-07-12 19:53:09.018379 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018390 | orchestrator | 2025-07-12 19:53:09.018401 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-12 19:53:09.018412 | orchestrator | Saturday 12 July 2025 19:53:04 +0000 (0:00:00.354) 0:00:43.501 ********* 2025-07-12 19:53:09.018423 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018434 | orchestrator | 2025-07-12 19:53:09.018445 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-07-12 19:53:09.018456 | orchestrator | Saturday 12 July 2025 19:53:04 +0000 (0:00:00.172) 0:00:43.673 ********* 2025-07-12 19:53:09.018467 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018478 | orchestrator | 2025-07-12 19:53:09.018488 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-12 19:53:09.018499 | orchestrator | Saturday 12 July 2025 19:53:04 +0000 (0:00:00.192) 0:00:43.866 ********* 2025-07-12 19:53:09.018510 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018520 | orchestrator | 2025-07-12 19:53:09.018531 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-12 19:53:09.018542 | orchestrator | Saturday 12 July 2025 19:53:05 +0000 (0:00:00.128) 0:00:43.995 ********* 2025-07-12 19:53:09.018552 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018563 | orchestrator | 2025-07-12 19:53:09.018573 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-12 19:53:09.018584 | orchestrator | Saturday 12 July 2025 19:53:05 +0000 (0:00:00.141) 0:00:44.136 ********* 2025-07-12 19:53:09.018595 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018606 | orchestrator | 2025-07-12 19:53:09.018616 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-12 19:53:09.018627 | orchestrator | Saturday 12 July 2025 19:53:05 +0000 (0:00:00.124) 0:00:44.261 ********* 2025-07-12 19:53:09.018638 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018648 | orchestrator | 2025-07-12 19:53:09.018659 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-12 19:53:09.018670 | orchestrator | Saturday 12 July 2025 19:53:05 +0000 (0:00:00.143) 0:00:44.404 ********* 2025-07-12 19:53:09.018680 | orchestrator | skipping: [testbed-node-4] 
2025-07-12 19:53:09.018691 | orchestrator | 2025-07-12 19:53:09.018702 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-12 19:53:09.018739 | orchestrator | Saturday 12 July 2025 19:53:05 +0000 (0:00:00.137) 0:00:44.541 ********* 2025-07-12 19:53:09.018754 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018768 | orchestrator | 2025-07-12 19:53:09.018781 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-12 19:53:09.018793 | orchestrator | Saturday 12 July 2025 19:53:05 +0000 (0:00:00.136) 0:00:44.678 ********* 2025-07-12 19:53:09.018821 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:09.018837 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:09.018850 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018863 | orchestrator | 2025-07-12 19:53:09.018876 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-12 19:53:09.018889 | orchestrator | Saturday 12 July 2025 19:53:05 +0000 (0:00:00.158) 0:00:44.837 ********* 2025-07-12 19:53:09.018901 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:09.018914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:09.018936 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.018948 | orchestrator | 2025-07-12 19:53:09.018960 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-07-12 19:53:09.018973 | orchestrator | Saturday 12 July 2025 19:53:06 +0000 (0:00:00.169) 0:00:45.006 ********* 2025-07-12 19:53:09.018986 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:09.018999 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:09.019011 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.019024 | orchestrator | 2025-07-12 19:53:09.019036 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-12 19:53:09.019049 | orchestrator | Saturday 12 July 2025 19:53:06 +0000 (0:00:00.157) 0:00:45.163 ********* 2025-07-12 19:53:09.019061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:09.019074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})  2025-07-12 19:53:09.019087 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:09.019098 | orchestrator | 2025-07-12 19:53:09.019109 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-12 19:53:09.019136 | orchestrator | Saturday 12 July 2025 19:53:06 +0000 (0:00:00.370) 0:00:45.533 ********* 2025-07-12 19:53:09.019147 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})  2025-07-12 19:53:09.019158 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 
'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})
2025-07-12 19:53:09.019169 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:53:09.019180 | orchestrator |
2025-07-12 19:53:09.019191 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-12 19:53:09.019201 | orchestrator | Saturday 12 July 2025 19:53:06 +0000 (0:00:00.156) 0:00:45.690 *********
2025-07-12 19:53:09.019212 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})
2025-07-12 19:53:09.019223 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})
2025-07-12 19:53:09.019234 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:53:09.019245 | orchestrator |
2025-07-12 19:53:09.019256 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-12 19:53:09.019267 | orchestrator | Saturday 12 July 2025 19:53:06 +0000 (0:00:00.195) 0:00:45.885 *********
2025-07-12 19:53:09.019278 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})
2025-07-12 19:53:09.019289 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})
2025-07-12 19:53:09.019300 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:53:09.019311 | orchestrator |
2025-07-12 19:53:09.019322 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-12 19:53:09.019333 | orchestrator | Saturday 12 July 2025 19:53:07 +0000 (0:00:00.153) 0:00:46.039 *********
2025-07-12 19:53:09.019344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})
2025-07-12 19:53:09.019361 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})
2025-07-12 19:53:09.019372 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:53:09.019383 | orchestrator |
2025-07-12 19:53:09.019394 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-12 19:53:09.019441 | orchestrator | Saturday 12 July 2025 19:53:07 +0000 (0:00:00.180) 0:00:46.220 *********
2025-07-12 19:53:09.019453 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:53:09.019465 | orchestrator |
2025-07-12 19:53:09.019475 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-12 19:53:09.019486 | orchestrator | Saturday 12 July 2025 19:53:07 +0000 (0:00:00.537) 0:00:46.757 *********
2025-07-12 19:53:09.019497 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:53:09.019508 | orchestrator |
2025-07-12 19:53:09.019518 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-12 19:53:09.019529 | orchestrator | Saturday 12 July 2025 19:53:08 +0000 (0:00:00.523) 0:00:47.281 *********
2025-07-12 19:53:09.019540 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:53:09.019551 | orchestrator |
2025-07-12 19:53:09.019561 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-12 19:53:09.019572 | orchestrator | Saturday 12 July 2025 19:53:08 +0000 (0:00:00.146) 0:00:47.428 *********
2025-07-12 19:53:09.019583 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'vg_name': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})
2025-07-12 19:53:09.019595 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'vg_name': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})
2025-07-12 19:53:09.019606 | orchestrator |
2025-07-12 19:53:09.019617 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-12 19:53:09.019627 | orchestrator | Saturday 12 July 2025 19:53:08 +0000 (0:00:00.170) 0:00:47.598 *********
2025-07-12 19:53:09.019638 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})
2025-07-12 19:53:09.019649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})
2025-07-12 19:53:09.019660 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:53:09.019671 | orchestrator |
2025-07-12 19:53:09.019681 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-12 19:53:09.019692 | orchestrator | Saturday 12 July 2025 19:53:08 +0000 (0:00:00.152) 0:00:47.751 *********
2025-07-12 19:53:09.019703 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})
2025-07-12 19:53:09.019735 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})
2025-07-12 19:53:09.019754 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:53:14.493977 | orchestrator |
2025-07-12 19:53:14.494128 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-12 19:53:14.494145 | orchestrator | Saturday 12 July 2025 19:53:09 +0000 (0:00:00.160) 0:00:47.911 *********
2025-07-12 19:53:14.494157 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})
2025-07-12 19:53:14.494169 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})
2025-07-12 19:53:14.494180 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:53:14.494192 | orchestrator |
2025-07-12 19:53:14.494203 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-12 19:53:14.494214 | orchestrator | Saturday 12 July 2025 19:53:09 +0000 (0:00:00.163) 0:00:48.074 *********
2025-07-12 19:53:14.494247 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 19:53:14.494259 | orchestrator |     "lvm_report": {
2025-07-12 19:53:14.494271 | orchestrator |         "lv": [
2025-07-12 19:53:14.494282 | orchestrator |             {
2025-07-12 19:53:14.494293 | orchestrator |                 "lv_name": "osd-block-2f895b30-8de9-512a-b128-a5c9585d4791",
2025-07-12 19:53:14.494304 | orchestrator |                 "vg_name": "ceph-2f895b30-8de9-512a-b128-a5c9585d4791"
2025-07-12 19:53:14.494314 | orchestrator |             },
2025-07-12 19:53:14.494325 | orchestrator |             {
2025-07-12 19:53:14.494336 | orchestrator |                 "lv_name": "osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58",
2025-07-12 19:53:14.494346 | orchestrator |                 "vg_name": "ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58"
2025-07-12 19:53:14.494357 | orchestrator |             }
2025-07-12 19:53:14.494367 | orchestrator |         ],
2025-07-12 19:53:14.494378 | orchestrator |         "pv": [
2025-07-12 19:53:14.494388 | orchestrator |             {
2025-07-12 19:53:14.494399 | orchestrator |                 "pv_name": "/dev/sdb",
2025-07-12 19:53:14.494410 | orchestrator |                 "vg_name": "ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58"
2025-07-12 19:53:14.494420 | orchestrator |             },
2025-07-12 19:53:14.494431 | orchestrator |             {
2025-07-12 19:53:14.494446 | orchestrator |                 "pv_name": "/dev/sdc",
2025-07-12 19:53:14.494464 | orchestrator |                 "vg_name": "ceph-2f895b30-8de9-512a-b128-a5c9585d4791"
2025-07-12 19:53:14.494483 | orchestrator |             }
2025-07-12 19:53:14.494502 | orchestrator |         ]
2025-07-12 19:53:14.494520 | orchestrator |     }
2025-07-12 19:53:14.494537 | orchestrator | }
2025-07-12 19:53:14.494550 | orchestrator |
2025-07-12 19:53:14.494563 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-07-12 19:53:14.494574 | orchestrator |
2025-07-12 19:53:14.494586 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 19:53:14.494598 | orchestrator | Saturday 12 July 2025 19:53:09 +0000 (0:00:00.507) 0:00:48.582 *********
2025-07-12 19:53:14.494611 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-12 19:53:14.494624 | orchestrator |
2025-07-12 19:53:14.494649 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 19:53:14.494661 | orchestrator | Saturday 12 July 2025 19:53:09 +0000 (0:00:00.240) 0:00:48.823 *********
2025-07-12 19:53:14.494673 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:53:14.494686 | orchestrator |
2025-07-12 19:53:14.494698 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.494739 | orchestrator | Saturday 12 July 2025 19:53:10 +0000 (0:00:00.225) 0:00:49.048 *********
2025-07-12 19:53:14.494754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-07-12 19:53:14.494767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-07-12 19:53:14.494779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-07-12 19:53:14.494791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-07-12 19:53:14.494803 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-07-12 19:53:14.494815 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-07-12 19:53:14.494827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-07-12 19:53:14.494840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-07-12 19:53:14.494852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-07-12 19:53:14.494864 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-07-12 19:53:14.494877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-07-12 19:53:14.494897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-07-12 19:53:14.494908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-07-12 19:53:14.494919 | orchestrator |
2025-07-12 19:53:14.494929 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.494940 | orchestrator | Saturday 12 July 2025 19:53:10 +0000 (0:00:00.423) 0:00:49.472 *********
2025-07-12 19:53:14.494950 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:14.494965 | orchestrator |
2025-07-12 19:53:14.494976 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.494987 | orchestrator | Saturday 12 July 2025 19:53:10 +0000 (0:00:00.181) 0:00:49.653 *********
2025-07-12 19:53:14.494998 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:14.495008 | orchestrator |
2025-07-12 19:53:14.495019 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495048 | orchestrator | Saturday 12 July 2025 19:53:10 +0000 (0:00:00.157) 0:00:49.811 *********
2025-07-12 19:53:14.495059 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:14.495070 | orchestrator |
2025-07-12 19:53:14.495081 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495092 | orchestrator | Saturday 12 July 2025 19:53:11 +0000 (0:00:00.175) 0:00:49.987 *********
2025-07-12 19:53:14.495102 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:14.495113 | orchestrator |
2025-07-12 19:53:14.495124 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495135 | orchestrator | Saturday 12 July 2025 19:53:11 +0000 (0:00:00.190) 0:00:50.178 *********
2025-07-12 19:53:14.495146 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:14.495156 | orchestrator |
2025-07-12 19:53:14.495167 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495178 | orchestrator | Saturday 12 July 2025 19:53:11 +0000 (0:00:00.186) 0:00:50.364 *********
2025-07-12 19:53:14.495189 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:14.495200 | orchestrator |
2025-07-12 19:53:14.495210 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495221 | orchestrator | Saturday 12 July 2025 19:53:11 +0000 (0:00:00.456) 0:00:50.821 *********
2025-07-12 19:53:14.495232 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:14.495243 | orchestrator |
2025-07-12 19:53:14.495254 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495264 | orchestrator | Saturday 12 July 2025 19:53:12 +0000 (0:00:00.191) 0:00:51.013 *********
2025-07-12 19:53:14.495275 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:14.495286 | orchestrator |
2025-07-12 19:53:14.495297 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495308 | orchestrator | Saturday 12 July 2025 19:53:12 +0000 (0:00:00.164) 0:00:51.177 *********
2025-07-12 19:53:14.495318 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909)
2025-07-12 19:53:14.495330 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909)
2025-07-12 19:53:14.495341 | orchestrator |
2025-07-12 19:53:14.495352 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495363 | orchestrator | Saturday 12 July 2025 19:53:12 +0000 (0:00:00.380) 0:00:51.558 *********
2025-07-12 19:53:14.495373 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8)
2025-07-12 19:53:14.495384 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8)
2025-07-12 19:53:14.495395 | orchestrator |
2025-07-12 19:53:14.495406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495416 | orchestrator | Saturday 12 July 2025 19:53:13 +0000 (0:00:00.383) 0:00:51.942 *********
2025-07-12 19:53:14.495438 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28)
2025-07-12 19:53:14.495449 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28)
2025-07-12 19:53:14.495460 | orchestrator |
2025-07-12 19:53:14.495471 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495482 | orchestrator | Saturday 12 July 2025 19:53:13 +0000 (0:00:00.388) 0:00:52.330 *********
2025-07-12 19:53:14.495493 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914)
2025-07-12 19:53:14.495503 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914)
2025-07-12 19:53:14.495514 | orchestrator |
2025-07-12 19:53:14.495525 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 19:53:14.495536 | orchestrator | Saturday 12 July 2025 19:53:13 +0000 (0:00:00.389) 0:00:52.719 *********
2025-07-12 19:53:14.495546 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 19:53:14.495557 | orchestrator |
2025-07-12 19:53:14.495568 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:14.495578 | orchestrator | Saturday 12 July 2025 19:53:14 +0000 (0:00:00.301) 0:00:53.021 *********
2025-07-12 19:53:14.495589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-07-12 19:53:14.495600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-07-12 19:53:14.495610 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-07-12 19:53:14.495621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-07-12 19:53:14.495632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-07-12 19:53:14.495642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-07-12 19:53:14.495653 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-07-12 19:53:14.495664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-07-12 19:53:14.495674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-07-12 19:53:14.495685 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-07-12 19:53:14.495696 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-07-12 19:53:14.495735 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-07-12 19:53:22.823657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-07-12 19:53:22.823836 | orchestrator |
2025-07-12 19:53:22.823854 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.823867 | orchestrator | Saturday 12 July 2025 19:53:14 +0000 (0:00:00.364) 0:00:53.385 *********
2025-07-12 19:53:22.823879 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.823892 | orchestrator |
2025-07-12 19:53:22.823903 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.823915 | orchestrator | Saturday 12 July 2025 19:53:14 +0000 (0:00:00.172) 0:00:53.558 *********
2025-07-12 19:53:22.823926 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.823937 | orchestrator |
2025-07-12 19:53:22.823948 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.823959 | orchestrator | Saturday 12 July 2025 19:53:14 +0000 (0:00:00.189) 0:00:53.748 *********
2025-07-12 19:53:22.823970 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.823981 | orchestrator |
2025-07-12 19:53:22.823992 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824027 | orchestrator | Saturday 12 July 2025 19:53:15 +0000 (0:00:00.468) 0:00:54.216 *********
2025-07-12 19:53:22.824038 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824049 | orchestrator |
2025-07-12 19:53:22.824060 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824071 | orchestrator | Saturday 12 July 2025 19:53:15 +0000 (0:00:00.180) 0:00:54.397 *********
2025-07-12 19:53:22.824082 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824093 | orchestrator |
2025-07-12 19:53:22.824104 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824115 | orchestrator | Saturday 12 July 2025 19:53:15 +0000 (0:00:00.183) 0:00:54.581 *********
2025-07-12 19:53:22.824126 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824137 | orchestrator |
2025-07-12 19:53:22.824148 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824159 | orchestrator | Saturday 12 July 2025 19:53:15 +0000 (0:00:00.212) 0:00:54.793 *********
2025-07-12 19:53:22.824170 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824181 | orchestrator |
2025-07-12 19:53:22.824193 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824206 | orchestrator | Saturday 12 July 2025 19:53:16 +0000 (0:00:00.200) 0:00:54.994 *********
2025-07-12 19:53:22.824219 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824231 | orchestrator |
2025-07-12 19:53:22.824243 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824255 | orchestrator | Saturday 12 July 2025 19:53:16 +0000 (0:00:00.184) 0:00:55.179 *********
2025-07-12 19:53:22.824267 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-07-12 19:53:22.824279 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-07-12 19:53:22.824292 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-07-12 19:53:22.824304 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-07-12 19:53:22.824316 | orchestrator |
2025-07-12 19:53:22.824328 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824341 | orchestrator | Saturday 12 July 2025 19:53:16 +0000 (0:00:00.576) 0:00:55.755 *********
2025-07-12 19:53:22.824353 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824365 | orchestrator |
2025-07-12 19:53:22.824377 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824390 | orchestrator | Saturday 12 July 2025 19:53:17 +0000 (0:00:00.179) 0:00:55.935 *********
2025-07-12 19:53:22.824402 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824414 | orchestrator |
2025-07-12 19:53:22.824427 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824439 | orchestrator | Saturday 12 July 2025 19:53:17 +0000 (0:00:00.176) 0:00:56.111 *********
2025-07-12 19:53:22.824451 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824464 | orchestrator |
2025-07-12 19:53:22.824477 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 19:53:22.824489 | orchestrator | Saturday 12 July 2025 19:53:17 +0000 (0:00:00.174) 0:00:56.286 *********
2025-07-12 19:53:22.824502 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824514 | orchestrator |
2025-07-12 19:53:22.824527 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-07-12 19:53:22.824539 | orchestrator | Saturday 12 July 2025 19:53:17 +0000 (0:00:00.179) 0:00:56.465 *********
2025-07-12 19:53:22.824550 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824561 | orchestrator |
2025-07-12 19:53:22.824572 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-07-12 19:53:22.824583 | orchestrator | Saturday 12 July 2025 19:53:17 +0000 (0:00:00.255) 0:00:56.721 *********
2025-07-12 19:53:22.824594 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'}})
2025-07-12 19:53:22.824606 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '71032f38-677b-542f-825f-c43a6d71b028'}})
2025-07-12 19:53:22.824624 | orchestrator |
2025-07-12 19:53:22.824635 | orchestrator | TASK [Create block VGs] ********************************************************
2025-07-12 19:53:22.824646 | orchestrator | Saturday 12 July 2025 19:53:18 +0000 (0:00:00.196) 0:00:56.917 *********
2025-07-12 19:53:22.824658 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:22.824670 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:22.824681 | orchestrator |
2025-07-12 19:53:22.824692 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-07-12 19:53:22.824738 | orchestrator | Saturday 12 July 2025 19:53:19 +0000 (0:00:01.924) 0:00:58.842 *********
2025-07-12 19:53:22.824751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:22.824763 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:22.824774 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824785 | orchestrator |
2025-07-12 19:53:22.824796 | orchestrator | TASK [Create block LVs] ********************************************************
2025-07-12 19:53:22.824807 | orchestrator | Saturday 12 July 2025 19:53:20 +0000 (0:00:00.145) 0:00:58.988 *********
2025-07-12 19:53:22.824818 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:22.824846 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:22.824858 | orchestrator |
2025-07-12 19:53:22.824869 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-12 19:53:22.824880 | orchestrator | Saturday 12 July 2025 19:53:21 +0000 (0:00:01.314) 0:01:00.303 *********
2025-07-12 19:53:22.824891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:22.824902 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:22.824913 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824924 | orchestrator |
2025-07-12 19:53:22.824935 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-12 19:53:22.824946 | orchestrator | Saturday 12 July 2025 19:53:21 +0000 (0:00:00.136) 0:01:00.440 *********
2025-07-12 19:53:22.824957 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.824968 | orchestrator |
2025-07-12 19:53:22.824979 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-12 19:53:22.824990 | orchestrator | Saturday 12 July 2025 19:53:21 +0000 (0:00:00.117) 0:01:00.557 *********
2025-07-12 19:53:22.825001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:22.825017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:22.825028 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.825039 | orchestrator |
2025-07-12 19:53:22.825050 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-12 19:53:22.825061 | orchestrator | Saturday 12 July 2025 19:53:21 +0000 (0:00:00.130) 0:01:00.688 *********
2025-07-12 19:53:22.825072 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.825090 | orchestrator |
2025-07-12 19:53:22.825101 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-12 19:53:22.825112 | orchestrator | Saturday 12 July 2025 19:53:21 +0000 (0:00:00.114) 0:01:00.802 *********
2025-07-12 19:53:22.825123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:22.825135 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:22.825146 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.825157 | orchestrator |
2025-07-12 19:53:22.825168 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-12 19:53:22.825178 | orchestrator | Saturday 12 July 2025 19:53:22 +0000 (0:00:00.137) 0:01:00.939 *********
2025-07-12 19:53:22.825189 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.825200 | orchestrator |
2025-07-12 19:53:22.825211 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-12 19:53:22.825222 | orchestrator | Saturday 12 July 2025 19:53:22 +0000 (0:00:00.122) 0:01:01.061 *********
2025-07-12 19:53:22.825233 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:22.825244 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:22.825255 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:22.825266 | orchestrator |
2025-07-12 19:53:22.825277 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-07-12 19:53:22.825288 | orchestrator | Saturday 12 July 2025 19:53:22 +0000 (0:00:00.135) 0:01:01.202 *********
2025-07-12 19:53:22.825298 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:53:22.825309 | orchestrator |
2025-07-12 19:53:22.825320 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-07-12 19:53:22.825331 | orchestrator | Saturday 12 July 2025 19:53:22 +0000 (0:00:00.135) 0:01:01.338 *********
2025-07-12 19:53:22.825349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:28.584638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:28.584769 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.584786 | orchestrator |
2025-07-12 19:53:28.584799 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-07-12 19:53:28.584811 | orchestrator | Saturday 12 July 2025 19:53:22 +0000 (0:00:00.381) 0:01:01.719 *********
2025-07-12 19:53:28.584822 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:28.584833 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:28.584844 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.584856 | orchestrator |
2025-07-12 19:53:28.584867 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-07-12 19:53:28.584878 | orchestrator | Saturday 12 July 2025 19:53:22 +0000 (0:00:00.148) 0:01:01.868 *********
2025-07-12 19:53:28.584889 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 19:53:28.584901 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 19:53:28.584912 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.584943 | orchestrator |
2025-07-12 19:53:28.584954 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-07-12 19:53:28.584965 | orchestrator | Saturday 12 July 2025 19:53:23 +0000 (0:00:00.158) 0:01:02.026 *********
2025-07-12 19:53:28.584976 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.584987 | orchestrator |
2025-07-12 19:53:28.584997 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-07-12 19:53:28.585008 | orchestrator | Saturday 12 July 2025 19:53:23 +0000 (0:00:00.128) 0:01:02.155 *********
2025-07-12 19:53:28.585019 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585030 | orchestrator |
2025-07-12 19:53:28.585041 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-07-12 19:53:28.585052 | orchestrator | Saturday 12 July 2025 19:53:23 +0000 (0:00:00.136) 0:01:02.291 *********
2025-07-12 19:53:28.585062 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585073 | orchestrator |
2025-07-12 19:53:28.585084 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-07-12 19:53:28.585106 | orchestrator | Saturday 12 July 2025 19:53:23 +0000 (0:00:00.147) 0:01:02.439 *********
2025-07-12 19:53:28.585118 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 19:53:28.585129 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-07-12 19:53:28.585141 | orchestrator | }
2025-07-12 19:53:28.585152 | orchestrator |
2025-07-12 19:53:28.585162 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-07-12 19:53:28.585173 | orchestrator | Saturday 12 July 2025 19:53:23 +0000 (0:00:00.151) 0:01:02.591 *********
2025-07-12 19:53:28.585184 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 19:53:28.585198 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-07-12 19:53:28.585210 | orchestrator | }
2025-07-12 19:53:28.585222 | orchestrator |
2025-07-12 19:53:28.585235 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-12 19:53:28.585248 | orchestrator | Saturday 12 July 2025 19:53:23 +0000 (0:00:00.140) 0:01:02.731 *********
2025-07-12 19:53:28.585260 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 19:53:28.585273 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-07-12 19:53:28.585285 | orchestrator | }
2025-07-12 19:53:28.585297 | orchestrator |
2025-07-12 19:53:28.585309 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-12 19:53:28.585322 | orchestrator | Saturday 12 July 2025 19:53:23 +0000 (0:00:00.143) 0:01:02.875 *********
2025-07-12 19:53:28.585334 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:53:28.585346 | orchestrator |
2025-07-12 19:53:28.585358 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-12 19:53:28.585370 | orchestrator | Saturday 12 July 2025 19:53:24 +0000 (0:00:00.513) 0:01:03.389 *********
2025-07-12 19:53:28.585383 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:53:28.585395 | orchestrator |
2025-07-12 19:53:28.585407 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-12 19:53:28.585419 | orchestrator | Saturday 12 July 2025 19:53:24 +0000 (0:00:00.510) 0:01:03.899 *********
2025-07-12 19:53:28.585431 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:53:28.585443 | orchestrator |
2025-07-12 19:53:28.585455 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-12 19:53:28.585466 | orchestrator | Saturday 12 July 2025 19:53:25 +0000 (0:00:00.522) 0:01:04.421 *********
2025-07-12 19:53:28.585478 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:53:28.585490 | orchestrator |
2025-07-12 19:53:28.585503 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-12 19:53:28.585515 | orchestrator | Saturday 12 July 2025 19:53:25 +0000 (0:00:00.343) 0:01:04.765 *********
2025-07-12 19:53:28.585527 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585539 | orchestrator |
2025-07-12 19:53:28.585551 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-12 19:53:28.585563 | orchestrator | Saturday 12 July 2025 19:53:25 +0000 (0:00:00.118) 0:01:04.883 *********
2025-07-12 19:53:28.585583 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585595 | orchestrator |
2025-07-12 19:53:28.585606 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-07-12 19:53:28.585617 | orchestrator | Saturday 12 July 2025 19:53:26 +0000 (0:00:00.110) 0:01:04.993 *********
2025-07-12 19:53:28.585628 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 19:53:28.585639 | orchestrator |     "vgs_report": {
2025-07-12 19:53:28.585650 | orchestrator |         "vg": []
2025-07-12 19:53:28.585676 | orchestrator |     }
2025-07-12 19:53:28.585688 | orchestrator | }
2025-07-12 19:53:28.585700 | orchestrator |
2025-07-12 19:53:28.585729 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-07-12 19:53:28.585740 | orchestrator | Saturday 12 July 2025 19:53:26 +0000 (0:00:00.148) 0:01:05.142 *********
2025-07-12 19:53:28.585751 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585762 | orchestrator |
2025-07-12 19:53:28.585772 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-12 19:53:28.585783 | orchestrator | Saturday 12 July 2025 19:53:26 +0000 (0:00:00.142) 0:01:05.284 *********
2025-07-12 19:53:28.585794 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585805 | orchestrator |
2025-07-12 19:53:28.585816 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-12 19:53:28.585826 | orchestrator | Saturday 12 July 2025 19:53:26 +0000 (0:00:00.139) 0:01:05.423 *********
2025-07-12 19:53:28.585837 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585848 | orchestrator |
2025-07-12 19:53:28.585859 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-12 19:53:28.585870 | orchestrator | Saturday 12 July 2025 19:53:26 +0000 (0:00:00.121) 0:01:05.544 *********
2025-07-12 19:53:28.585880 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585891 | orchestrator |
2025-07-12 19:53:28.585902 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-12 19:53:28.585913 | orchestrator | Saturday 12 July 2025 19:53:26 +0000 (0:00:00.133) 0:01:05.678 *********
2025-07-12 19:53:28.585924 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585935 | orchestrator |
2025-07-12 19:53:28.585946 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-12 19:53:28.585956 | orchestrator | Saturday 12 July 2025 19:53:26 +0000 (0:00:00.145) 0:01:05.824 *********
2025-07-12 19:53:28.585967 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.585978 | orchestrator |
2025-07-12 19:53:28.585989 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-12 19:53:28.585999 | orchestrator | Saturday 12 July 2025 19:53:27 +0000 (0:00:00.129) 0:01:05.953 *********
2025-07-12 19:53:28.586010 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.586067 | orchestrator |
2025-07-12 19:53:28.586079 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-12 19:53:28.586090 | orchestrator | Saturday 12 July 2025 19:53:27 +0000 (0:00:00.111) 0:01:06.065 *********
2025-07-12 19:53:28.586101 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.586112 | orchestrator |
2025-07-12 19:53:28.586123 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-12 19:53:28.586133 | orchestrator | Saturday 12 July 2025 19:53:27 +0000 (0:00:00.132) 0:01:06.197 *********
2025-07-12 19:53:28.586144 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:53:28.586155 | orchestrator |
2025-07-12 19:53:28.586165 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-12 19:53:28.586182 | orchestrator | Saturday 12 July 2025 19:53:27 +0000 (0:00:00.247) 0:01:06.445 *********
2025-07-12 19:53:28.586193 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:28.586204 | orchestrator | 2025-07-12 19:53:28.586214 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-12 19:53:28.586225 | orchestrator | Saturday 12 July 2025 19:53:27 +0000 (0:00:00.127) 0:01:06.573 ********* 2025-07-12 19:53:28.586236 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:28.586253 | orchestrator | 2025-07-12 19:53:28.586264 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-12 19:53:28.586275 | orchestrator | Saturday 12 July 2025 19:53:27 +0000 (0:00:00.118) 0:01:06.692 ********* 2025-07-12 19:53:28.586285 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:28.586296 | orchestrator | 2025-07-12 19:53:28.586307 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-12 19:53:28.586318 | orchestrator | Saturday 12 July 2025 19:53:27 +0000 (0:00:00.116) 0:01:06.808 ********* 2025-07-12 19:53:28.586328 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:28.586339 | orchestrator | 2025-07-12 19:53:28.586350 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-12 19:53:28.586360 | orchestrator | Saturday 12 July 2025 19:53:28 +0000 (0:00:00.124) 0:01:06.932 ********* 2025-07-12 19:53:28.586371 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:28.586382 | orchestrator | 2025-07-12 19:53:28.586392 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-12 19:53:28.586403 | orchestrator | Saturday 12 July 2025 19:53:28 +0000 (0:00:00.128) 0:01:07.061 ********* 2025-07-12 19:53:28.586414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 
19:53:28.586425 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:28.586436 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:28.586446 | orchestrator | 2025-07-12 19:53:28.586457 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-12 19:53:28.586468 | orchestrator | Saturday 12 July 2025 19:53:28 +0000 (0:00:00.143) 0:01:07.205 ********* 2025-07-12 19:53:28.586479 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:28.586490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:28.586500 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:28.586511 | orchestrator | 2025-07-12 19:53:28.586522 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-12 19:53:28.586533 | orchestrator | Saturday 12 July 2025 19:53:28 +0000 (0:00:00.133) 0:01:07.338 ********* 2025-07-12 19:53:28.586551 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:31.327818 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:31.327936 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:31.327952 | orchestrator | 2025-07-12 19:53:31.327965 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-12 19:53:31.327977 | orchestrator | Saturday 12 July 2025 
19:53:28 +0000 (0:00:00.143) 0:01:07.481 ********* 2025-07-12 19:53:31.327988 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:31.327999 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:31.328010 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:31.328033 | orchestrator | 2025-07-12 19:53:31.328055 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-12 19:53:31.328066 | orchestrator | Saturday 12 July 2025 19:53:28 +0000 (0:00:00.136) 0:01:07.618 ********* 2025-07-12 19:53:31.328086 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:31.328118 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:31.328130 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:31.328141 | orchestrator | 2025-07-12 19:53:31.328152 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-12 19:53:31.328163 | orchestrator | Saturday 12 July 2025 19:53:28 +0000 (0:00:00.145) 0:01:07.763 ********* 2025-07-12 19:53:31.328174 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:31.328188 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:31.328207 | orchestrator | 
skipping: [testbed-node-5] 2025-07-12 19:53:31.328219 | orchestrator | 2025-07-12 19:53:31.328230 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-12 19:53:31.328241 | orchestrator | Saturday 12 July 2025 19:53:29 +0000 (0:00:00.138) 0:01:07.902 ********* 2025-07-12 19:53:31.328252 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:31.328263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:31.328274 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:31.328285 | orchestrator | 2025-07-12 19:53:31.328295 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-12 19:53:31.328307 | orchestrator | Saturday 12 July 2025 19:53:29 +0000 (0:00:00.277) 0:01:08.179 ********* 2025-07-12 19:53:31.328318 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:31.328330 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:31.328342 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:31.328354 | orchestrator | 2025-07-12 19:53:31.328367 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-12 19:53:31.328379 | orchestrator | Saturday 12 July 2025 19:53:29 +0000 (0:00:00.136) 0:01:08.316 ********* 2025-07-12 19:53:31.328392 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:53:31.328404 | orchestrator | 2025-07-12 19:53:31.328417 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-07-12 19:53:31.328429 | orchestrator | Saturday 12 July 2025 19:53:29 +0000 (0:00:00.527) 0:01:08.844 ********* 2025-07-12 19:53:31.328441 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:53:31.328454 | orchestrator | 2025-07-12 19:53:31.328466 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-12 19:53:31.328479 | orchestrator | Saturday 12 July 2025 19:53:30 +0000 (0:00:00.511) 0:01:09.355 ********* 2025-07-12 19:53:31.328491 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:53:31.328503 | orchestrator | 2025-07-12 19:53:31.328515 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-12 19:53:31.328527 | orchestrator | Saturday 12 July 2025 19:53:30 +0000 (0:00:00.136) 0:01:09.492 ********* 2025-07-12 19:53:31.328540 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'vg_name': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'}) 2025-07-12 19:53:31.328553 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'vg_name': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'}) 2025-07-12 19:53:31.328566 | orchestrator | 2025-07-12 19:53:31.328578 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-12 19:53:31.328597 | orchestrator | Saturday 12 July 2025 19:53:30 +0000 (0:00:00.149) 0:01:09.641 ********* 2025-07-12 19:53:31.328625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:31.328638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:31.328651 | orchestrator | skipping: 
[testbed-node-5] 2025-07-12 19:53:31.328663 | orchestrator | 2025-07-12 19:53:31.328675 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-12 19:53:31.328688 | orchestrator | Saturday 12 July 2025 19:53:30 +0000 (0:00:00.147) 0:01:09.789 ********* 2025-07-12 19:53:31.328700 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:31.328728 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:31.328740 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:31.328751 | orchestrator | 2025-07-12 19:53:31.328762 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-12 19:53:31.328773 | orchestrator | Saturday 12 July 2025 19:53:31 +0000 (0:00:00.142) 0:01:09.931 ********* 2025-07-12 19:53:31.328784 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})  2025-07-12 19:53:31.328809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})  2025-07-12 19:53:31.328820 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:31.328831 | orchestrator | 2025-07-12 19:53:31.328842 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-12 19:53:31.328854 | orchestrator | Saturday 12 July 2025 19:53:31 +0000 (0:00:00.132) 0:01:10.064 ********* 2025-07-12 19:53:31.328865 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 19:53:31.328876 | orchestrator |  "lvm_report": { 2025-07-12 19:53:31.328887 | orchestrator |  "lv": [ 2025-07-12 
19:53:31.328898 | orchestrator |  { 2025-07-12 19:53:31.328909 | orchestrator |  "lv_name": "osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a", 2025-07-12 19:53:31.328925 | orchestrator |  "vg_name": "ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a" 2025-07-12 19:53:31.328937 | orchestrator |  }, 2025-07-12 19:53:31.328948 | orchestrator |  { 2025-07-12 19:53:31.328959 | orchestrator |  "lv_name": "osd-block-71032f38-677b-542f-825f-c43a6d71b028", 2025-07-12 19:53:31.328970 | orchestrator |  "vg_name": "ceph-71032f38-677b-542f-825f-c43a6d71b028" 2025-07-12 19:53:31.328981 | orchestrator |  } 2025-07-12 19:53:31.328992 | orchestrator |  ], 2025-07-12 19:53:31.329003 | orchestrator |  "pv": [ 2025-07-12 19:53:31.329014 | orchestrator |  { 2025-07-12 19:53:31.329025 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-12 19:53:31.329036 | orchestrator |  "vg_name": "ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a" 2025-07-12 19:53:31.329047 | orchestrator |  }, 2025-07-12 19:53:31.329058 | orchestrator |  { 2025-07-12 19:53:31.329069 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-12 19:53:31.329080 | orchestrator |  "vg_name": "ceph-71032f38-677b-542f-825f-c43a6d71b028" 2025-07-12 19:53:31.329091 | orchestrator |  } 2025-07-12 19:53:31.329102 | orchestrator |  ] 2025-07-12 19:53:31.329113 | orchestrator |  } 2025-07-12 19:53:31.329124 | orchestrator | } 2025-07-12 19:53:31.329135 | orchestrator | 2025-07-12 19:53:31.329146 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:53:31.329164 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-12 19:53:31.329175 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-12 19:53:31.329186 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-12 19:53:31.329198 | orchestrator | 2025-07-12 19:53:31.329209 | 
orchestrator | 2025-07-12 19:53:31.329220 | orchestrator | 2025-07-12 19:53:31.329231 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:53:31.329241 | orchestrator | Saturday 12 July 2025 19:53:31 +0000 (0:00:00.136) 0:01:10.201 ********* 2025-07-12 19:53:31.329252 | orchestrator | =============================================================================== 2025-07-12 19:53:31.329263 | orchestrator | Create block VGs -------------------------------------------------------- 5.80s 2025-07-12 19:53:31.329274 | orchestrator | Create block LVs -------------------------------------------------------- 4.11s 2025-07-12 19:53:31.329285 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.98s 2025-07-12 19:53:31.329296 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.58s 2025-07-12 19:53:31.329307 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2025-07-12 19:53:31.329317 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.54s 2025-07-12 19:53:31.329328 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.53s 2025-07-12 19:53:31.329339 | orchestrator | Add known partitions to the list of available block devices ------------- 1.33s 2025-07-12 19:53:31.329356 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s 2025-07-12 19:53:31.569606 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-07-12 19:53:31.569690 | orchestrator | Print LVM report data --------------------------------------------------- 0.94s 2025-07-12 19:53:31.569729 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s 2025-07-12 19:53:31.569742 | orchestrator | Add known partitions to the list of 
available block devices ------------- 0.80s 2025-07-12 19:53:31.569752 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.76s 2025-07-12 19:53:31.569762 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.70s 2025-07-12 19:53:31.569772 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.70s 2025-07-12 19:53:31.569782 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.70s 2025-07-12 19:53:31.569792 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2025-07-12 19:53:31.569802 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.66s 2025-07-12 19:53:31.569812 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.65s 2025-07-12 19:53:43.482338 | orchestrator | 2025-07-12 19:53:43 | INFO  | Task eaea9806-f94b-48ae-8116-6b49c8e1f00e (facts) was prepared for execution. 2025-07-12 19:53:43.482441 | orchestrator | 2025-07-12 19:53:43 | INFO  | It takes a moment until task eaea9806-f94b-48ae-8116-6b49c8e1f00e (facts) has been started and output is visible here. 
2025-07-12 19:53:54.821131 | orchestrator | 2025-07-12 19:53:54.821191 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-12 19:53:54.821197 | orchestrator | 2025-07-12 19:53:54.821202 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-12 19:53:54.821207 | orchestrator | Saturday 12 July 2025 19:53:47 +0000 (0:00:00.250) 0:00:00.250 ********* 2025-07-12 19:53:54.821211 | orchestrator | ok: [testbed-manager] 2025-07-12 19:53:54.821215 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:53:54.821231 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:53:54.821235 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:53:54.821239 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:53:54.821243 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:53:54.821247 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:53:54.821251 | orchestrator | 2025-07-12 19:53:54.821255 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-12 19:53:54.821259 | orchestrator | Saturday 12 July 2025 19:53:48 +0000 (0:00:00.971) 0:00:01.222 ********* 2025-07-12 19:53:54.821270 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:53:54.821274 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:53:54.821279 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:53:54.821282 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:53:54.821286 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:53:54.821290 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:54.821294 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:54.821298 | orchestrator | 2025-07-12 19:53:54.821302 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 19:53:54.821305 | orchestrator | 2025-07-12 19:53:54.821309 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-07-12 19:53:54.821313 | orchestrator | Saturday 12 July 2025 19:53:49 +0000 (0:00:01.083) 0:00:02.305 ********* 2025-07-12 19:53:54.821317 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:53:54.821321 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:53:54.821324 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:53:54.821328 | orchestrator | ok: [testbed-manager] 2025-07-12 19:53:54.821332 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:53:54.821336 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:53:54.821339 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:53:54.821343 | orchestrator | 2025-07-12 19:53:54.821347 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-12 19:53:54.821351 | orchestrator | 2025-07-12 19:53:54.821354 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-12 19:53:54.821358 | orchestrator | Saturday 12 July 2025 19:53:54 +0000 (0:00:04.608) 0:00:06.913 ********* 2025-07-12 19:53:54.821362 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:53:54.821366 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:53:54.821369 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:53:54.821373 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:53:54.821377 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:53:54.821381 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:53:54.821384 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:53:54.821388 | orchestrator | 2025-07-12 19:53:54.821392 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:53:54.821396 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:53:54.821400 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-07-12 19:53:54.821404 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:53:54.821408 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:53:54.821411 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:53:54.821415 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:53:54.821419 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 19:53:54.821425 | orchestrator | 2025-07-12 19:53:54.821429 | orchestrator | 2025-07-12 19:53:54.821433 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:53:54.821437 | orchestrator | Saturday 12 July 2025 19:53:54 +0000 (0:00:00.456) 0:00:07.370 ********* 2025-07-12 19:53:54.821441 | orchestrator | =============================================================================== 2025-07-12 19:53:54.821444 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.61s 2025-07-12 19:53:54.821448 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.08s 2025-07-12 19:53:54.821452 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.97s 2025-07-12 19:53:54.821456 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-07-12 19:54:06.767992 | orchestrator | 2025-07-12 19:54:06 | INFO  | Task 4f2c3c65-9837-4116-bec8-52c158b88f51 (frr) was prepared for execution. 2025-07-12 19:54:06.768092 | orchestrator | 2025-07-12 19:54:06 | INFO  | It takes a moment until task 4f2c3c65-9837-4116-bec8-52c158b88f51 (frr) has been started and output is visible here. 
2025-07-12 19:54:30.385057 | orchestrator | 2025-07-12 19:54:30.385173 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-07-12 19:54:30.385192 | orchestrator | 2025-07-12 19:54:30.385205 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-07-12 19:54:30.385217 | orchestrator | Saturday 12 July 2025 19:54:10 +0000 (0:00:00.177) 0:00:00.177 ********* 2025-07-12 19:54:30.385229 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 19:54:30.385241 | orchestrator | 2025-07-12 19:54:30.385253 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-07-12 19:54:30.385264 | orchestrator | Saturday 12 July 2025 19:54:10 +0000 (0:00:00.172) 0:00:00.350 ********* 2025-07-12 19:54:30.385275 | orchestrator | changed: [testbed-manager] 2025-07-12 19:54:30.385287 | orchestrator | 2025-07-12 19:54:30.385299 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-07-12 19:54:30.385310 | orchestrator | Saturday 12 July 2025 19:54:11 +0000 (0:00:01.024) 0:00:01.375 ********* 2025-07-12 19:54:30.385321 | orchestrator | changed: [testbed-manager] 2025-07-12 19:54:30.385332 | orchestrator | 2025-07-12 19:54:30.385350 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-07-12 19:54:30.385362 | orchestrator | Saturday 12 July 2025 19:54:19 +0000 (0:00:08.195) 0:00:09.570 ********* 2025-07-12 19:54:30.385373 | orchestrator | ok: [testbed-manager] 2025-07-12 19:54:30.385385 | orchestrator | 2025-07-12 19:54:30.385397 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-07-12 19:54:30.385408 | orchestrator | Saturday 12 July 2025 19:54:21 +0000 (0:00:01.187) 0:00:10.758 ********* 2025-07-12 
19:54:30.385419 | orchestrator | changed: [testbed-manager]
2025-07-12 19:54:30.385431 | orchestrator |
2025-07-12 19:54:30.385442 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-07-12 19:54:30.385453 | orchestrator | Saturday 12 July 2025  19:54:22 +0000 (0:00:00.905)       0:00:11.663 *********
2025-07-12 19:54:30.385464 | orchestrator | ok: [testbed-manager]
2025-07-12 19:54:30.385475 | orchestrator |
2025-07-12 19:54:30.385486 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-07-12 19:54:30.385498 | orchestrator | Saturday 12 July 2025  19:54:23 +0000 (0:00:01.187)       0:00:12.851 *********
2025-07-12 19:54:30.385509 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 19:54:30.385520 | orchestrator |
2025-07-12 19:54:30.385531 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-07-12 19:54:30.385542 | orchestrator | Saturday 12 July 2025  19:54:24 +0000 (0:00:00.805)       0:00:13.657 *********
2025-07-12 19:54:30.385553 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:54:30.385564 | orchestrator |
2025-07-12 19:54:30.385576 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-07-12 19:54:30.385604 | orchestrator | Saturday 12 July 2025  19:54:24 +0000 (0:00:00.156)       0:00:13.814 *********
2025-07-12 19:54:30.385615 | orchestrator | changed: [testbed-manager]
2025-07-12 19:54:30.385626 | orchestrator |
2025-07-12 19:54:30.385637 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-07-12 19:54:30.385648 | orchestrator | Saturday 12 July 2025  19:54:25 +0000 (0:00:00.973)       0:00:14.787 *********
2025-07-12 19:54:30.385659 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-07-12 19:54:30.385670 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-07-12 19:54:30.385716 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-07-12 19:54:30.385735 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-07-12 19:54:30.385754 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-07-12 19:54:30.385774 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-07-12 19:54:30.385786 | orchestrator |
2025-07-12 19:54:30.385797 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-07-12 19:54:30.385808 | orchestrator | Saturday 12 July 2025  19:54:27 +0000 (0:00:02.185)       0:00:16.973 *********
2025-07-12 19:54:30.385818 | orchestrator | ok: [testbed-manager]
2025-07-12 19:54:30.385829 | orchestrator |
2025-07-12 19:54:30.385840 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-07-12 19:54:30.385850 | orchestrator | Saturday 12 July 2025  19:54:28 +0000 (0:00:01.370)       0:00:18.344 *********
2025-07-12 19:54:30.385861 | orchestrator | changed: [testbed-manager]
2025-07-12 19:54:30.385871 | orchestrator |
2025-07-12 19:54:30.385882 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:54:30.385893 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 19:54:30.385904 | orchestrator |
2025-07-12 19:54:30.385914 | orchestrator |
2025-07-12 19:54:30.385925 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:54:30.385935 | orchestrator | Saturday 12 July 2025  19:54:30 +0000 (0:00:01.353)       0:00:19.697 *********
2025-07-12 19:54:30.385946 | orchestrator | ===============================================================================
2025-07-12 19:54:30.385957 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.20s
2025-07-12 19:54:30.385968 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.19s
2025-07-12 19:54:30.385979 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.37s
2025-07-12 19:54:30.385990 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.35s
2025-07-12 19:54:30.386086 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s
2025-07-12 19:54:30.386103 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.19s
2025-07-12 19:54:30.386114 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.02s
2025-07-12 19:54:30.386125 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.97s
2025-07-12 19:54:30.386136 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s
2025-07-12 19:54:30.386147 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.81s
2025-07-12 19:54:30.386158 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.17s
2025-07-12 19:54:30.386169 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.16s
2025-07-12 19:54:30.656366 | orchestrator |
2025-07-12 19:54:30.660025 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jul 12 19:54:30 UTC 2025
2025-07-12 19:54:30.660086 | orchestrator |
2025-07-12 19:54:32.390273 | orchestrator | 2025-07-12 19:54:32 | INFO  | Collection nutshell is prepared for execution
2025-07-12 19:54:32.390368 | orchestrator | 2025-07-12 19:54:32 | INFO  | D [0] - dotfiles
2025-07-12 19:54:42.487742 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [0] - homer
2025-07-12 19:54:42.487859 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [0] - netdata
2025-07-12 19:54:42.487877 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [0] - openstackclient
2025-07-12 19:54:42.487889 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [0] - phpmyadmin
2025-07-12 19:54:42.487900 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [0] - common
2025-07-12 19:54:42.493085 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [1] -- loadbalancer
2025-07-12 19:54:42.493404 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [2] --- opensearch
2025-07-12 19:54:42.493597 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [2] --- mariadb-ng
2025-07-12 19:54:42.493863 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [3] ---- horizon
2025-07-12 19:54:42.494129 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [3] ---- keystone
2025-07-12 19:54:42.494461 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [4] ----- neutron
2025-07-12 19:54:42.494661 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [5] ------ wait-for-nova
2025-07-12 19:54:42.494824 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [5] ------ octavia
2025-07-12 19:54:42.496175 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [4] ----- barbican
2025-07-12 19:54:42.496331 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [4] ----- designate
2025-07-12 19:54:42.496417 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [4] ----- ironic
2025-07-12 19:54:42.496434 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [4] ----- placement
2025-07-12 19:54:42.496613 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [4] ----- magnum
2025-07-12 19:54:42.497419 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [1] -- openvswitch
2025-07-12 19:54:42.497544 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [2] --- ovn
2025-07-12 19:54:42.497920 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [1] -- memcached
2025-07-12 19:54:42.498253 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [1] -- redis
2025-07-12 19:54:42.498274 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [1] -- rabbitmq-ng
2025-07-12 19:54:42.498551 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [0] - kubernetes
2025-07-12 19:54:42.501525 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [1] -- kubeconfig
2025-07-12 19:54:42.501614 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [1] -- copy-kubeconfig
2025-07-12 19:54:42.501630 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [0] - ceph
2025-07-12 19:54:42.504257 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [1] -- ceph-pools
2025-07-12 19:54:42.504460 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [2] --- copy-ceph-keys
2025-07-12 19:54:42.504493 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [3] ---- cephclient
2025-07-12 19:54:42.504624 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-07-12 19:54:42.504664 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [4] ----- wait-for-keystone
2025-07-12 19:54:42.504823 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [5] ------ kolla-ceph-rgw
2025-07-12 19:54:42.504849 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [5] ------ glance
2025-07-12 19:54:42.504867 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [5] ------ cinder
2025-07-12 19:54:42.504884 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [5] ------ nova
2025-07-12 19:54:42.505056 | orchestrator | 2025-07-12 19:54:42 | INFO  | A [4] ----- prometheus
2025-07-12 19:54:42.505074 | orchestrator | 2025-07-12 19:54:42 | INFO  | D [5] ------ grafana
2025-07-12 19:54:42.709897 | orchestrator | 2025-07-12 19:54:42 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-07-12 19:54:42.710003 | orchestrator | 2025-07-12 19:54:42 | INFO  | Tasks are running in the background
2025-07-12 19:54:45.588213 | orchestrator | 2025-07-12 19:54:45 | INFO  | No task IDs specified, wait for
all currently running tasks 2025-07-12 19:54:47.716024 | orchestrator | 2025-07-12 19:54:47 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:54:47.716263 | orchestrator | 2025-07-12 19:54:47 | INFO  | Task c86faba4-c8de-471a-93a0-d8b8d12e7153 is in state STARTED 2025-07-12 19:54:47.719741 | orchestrator | 2025-07-12 19:54:47 | INFO  | Task b516aaf4-fccf-4712-bec8-332ffe83a680 is in state STARTED 2025-07-12 19:54:47.720146 | orchestrator | 2025-07-12 19:54:47 | INFO  | Task 8e47d045-e75a-417a-ba90-90fc7f029e73 is in state STARTED 2025-07-12 19:54:47.720592 | orchestrator | 2025-07-12 19:54:47 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:54:47.721214 | orchestrator | 2025-07-12 19:54:47 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:54:47.721737 | orchestrator | 2025-07-12 19:54:47 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:54:47.721763 | orchestrator | 2025-07-12 19:54:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:54:50.767889 | orchestrator | 2025-07-12 19:54:50 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:54:50.768583 | orchestrator | 2025-07-12 19:54:50 | INFO  | Task c86faba4-c8de-471a-93a0-d8b8d12e7153 is in state STARTED 2025-07-12 19:54:50.769277 | orchestrator | 2025-07-12 19:54:50 | INFO  | Task b516aaf4-fccf-4712-bec8-332ffe83a680 is in state STARTED 2025-07-12 19:54:50.769312 | orchestrator | 2025-07-12 19:54:50 | INFO  | Task 8e47d045-e75a-417a-ba90-90fc7f029e73 is in state STARTED 2025-07-12 19:54:50.769933 | orchestrator | 2025-07-12 19:54:50 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:54:50.770805 | orchestrator | 2025-07-12 19:54:50 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:54:50.771326 | orchestrator | 2025-07-12 19:54:50 | INFO  | Task 
11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:54:50.771377 | orchestrator | 2025-07-12 19:54:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:09.176039 | orchestrator | 2025-07-12 19:55:09 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:09.178708 | orchestrator | 2025-07-12 19:55:09 | INFO  | Task c86faba4-c8de-471a-93a0-d8b8d12e7153 is in state STARTED 2025-07-12 19:55:09.182380 | orchestrator | 2025-07-12 19:55:09 | INFO  | Task b516aaf4-fccf-4712-bec8-332ffe83a680 is in state STARTED 2025-07-12 19:55:09.182410 | orchestrator | 2025-07-12 19:55:09 | INFO  | Task
8e47d045-e75a-417a-ba90-90fc7f029e73 is in state STARTED 2025-07-12 19:55:09.182948 | orchestrator | 2025-07-12 19:55:09 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:09.186157 | orchestrator | 2025-07-12 19:55:09 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:09.188328 | orchestrator | 2025-07-12 19:55:09 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:09.188359 | orchestrator | 2025-07-12 19:55:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:12.248987 | orchestrator | 2025-07-12 19:55:12 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:12.255468 | orchestrator | 2025-07-12 19:55:12 | INFO  | Task c86faba4-c8de-471a-93a0-d8b8d12e7153 is in state STARTED 2025-07-12 19:55:12.257781 | orchestrator | 2025-07-12 19:55:12 | INFO  | Task b516aaf4-fccf-4712-bec8-332ffe83a680 is in state SUCCESS 2025-07-12 19:55:12.258646 | orchestrator | 2025-07-12 19:55:12.258728 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-07-12 19:55:12.258752 | orchestrator | 2025-07-12 19:55:12.258764 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-07-12 19:55:12.258785 | orchestrator | Saturday 12 July 2025 19:54:55 +0000 (0:00:01.004) 0:00:01.004 ********* 2025-07-12 19:55:12.258797 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:12.258809 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:55:12.258832 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:55:12.258843 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:55:12.258854 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:55:12.258865 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:55:12.258876 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:55:12.258898 | orchestrator | 2025-07-12 19:55:12.258909 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-07-12 19:55:12.258921 | orchestrator | Saturday 12 July 2025 19:54:59 +0000 (0:00:04.093) 0:00:05.097 ********* 2025-07-12 19:55:12.258943 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-12 19:55:12.258955 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-12 19:55:12.258966 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-12 19:55:12.258977 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-12 19:55:12.258999 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-12 19:55:12.259010 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-12 19:55:12.259021 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-12 19:55:12.259032 | orchestrator | 2025-07-12 19:55:12.259050 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-07-12 19:55:12.259095 | orchestrator | Saturday 12 July 2025 19:55:01 +0000 (0:00:01.789) 0:00:06.887 ********* 2025-07-12 19:55:12.259111 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 19:55:00.214741', 'end': '2025-07-12 19:55:00.222923', 'delta': '0:00:00.008182', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 19:55:12.259131 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 19:55:00.157975', 'end': '2025-07-12 19:55:00.161486', 'delta': '0:00:00.003511', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 19:55:12.259156 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 19:55:00.318495', 'end': '2025-07-12 19:55:00.327487', 'delta': '0:00:00.008992', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 19:55:12.259183 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 19:55:00.462616', 'end': '2025-07-12 19:55:00.470884', 'delta': '0:00:00.008268', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 19:55:12.259200 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 19:55:00.615699', 'end': '2025-07-12 19:55:00.622439', 'delta': '0:00:00.006740', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 19:55:12.259226 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 19:55:00.728089', 'end': '2025-07-12 19:55:00.734631', 'delta': '0:00:00.006542', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 19:55:12.259239 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 19:55:00.746532', 'end': '2025-07-12 19:55:00.751478', 'delta': '0:00:00.004946', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-12 19:55:12.259250 | orchestrator | 2025-07-12 19:55:12.259262 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-07-12 19:55:12.259284 | orchestrator | Saturday 12 July 2025 19:55:03 +0000 (0:00:02.498) 0:00:09.385 ********* 2025-07-12 19:55:12.259296 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-12 19:55:12.259307 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-12 19:55:12.259328 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-12 19:55:12.259339 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-12 19:55:12.259360 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-12 19:55:12.259371 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-12 19:55:12.259382 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-12 19:55:12.259392 | orchestrator | 2025-07-12 19:55:12.259403 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-07-12 19:55:12.259414 | orchestrator | Saturday 12 July 2025 19:55:05 +0000 (0:00:01.904) 0:00:11.290 ********* 2025-07-12 19:55:12.259425 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-07-12 19:55:12.259436 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-07-12 19:55:12.259447 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-07-12 19:55:12.259468 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-07-12 19:55:12.259479 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-07-12 19:55:12.259490 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-07-12 19:55:12.259501 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-07-12 19:55:12.259512 | orchestrator | 2025-07-12 19:55:12.259533 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:55:12.259552 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:55:12.259564 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:55:12.259582 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:55:12.259593 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:55:12.259604 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:55:12.259615 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:55:12.259626 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:55:12.259637 | orchestrator | 2025-07-12 19:55:12.259648 | orchestrator | 2025-07-12 19:55:12.259692 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-07-12 19:55:12.259704 | orchestrator | Saturday 12 July 2025 19:55:09 +0000 (0:00:03.417) 0:00:14.707 ********* 2025-07-12 19:55:12.259715 | orchestrator | =============================================================================== 2025-07-12 19:55:12.259726 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.09s 2025-07-12 19:55:12.259748 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.42s 2025-07-12 19:55:12.259759 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.50s 2025-07-12 19:55:12.259770 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.90s 2025-07-12 19:55:12.259782 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.79s 2025-07-12 19:55:12.263166 | orchestrator | 2025-07-12 19:55:12 | INFO  | Task 8e47d045-e75a-417a-ba90-90fc7f029e73 is in state STARTED 2025-07-12 19:55:12.264940 | orchestrator | 2025-07-12 19:55:12 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:12.266801 | orchestrator | 2025-07-12 19:55:12 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:12.269958 | orchestrator | 2025-07-12 19:55:12 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:12.271020 | orchestrator | 2025-07-12 19:55:12 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:12.271046 | orchestrator | 2025-07-12 19:55:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:15.314489 | orchestrator | 2025-07-12 19:55:15 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:15.314578 | orchestrator | 2025-07-12 19:55:15 | INFO  | Task c86faba4-c8de-471a-93a0-d8b8d12e7153 is 
in state STARTED 2025-07-12 19:55:15.314592 | orchestrator | 2025-07-12 19:55:15 | INFO  | Task 8e47d045-e75a-417a-ba90-90fc7f029e73 is in state STARTED 2025-07-12 19:55:15.314604 | orchestrator | 2025-07-12 19:55:15 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:15.314615 | orchestrator | 2025-07-12 19:55:15 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:15.315023 | orchestrator | 2025-07-12 19:55:15 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:15.316866 | orchestrator | 2025-07-12 19:55:15 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:15.316910 | orchestrator | 2025-07-12 19:55:15 | INFO  | Wait 1 second(s) until the next
check 2025-07-12 19:55:27.513083 | orchestrator | 2025-07-12 19:55:27 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:27.514155 | orchestrator | 2025-07-12 19:55:27 | INFO  | Task c86faba4-c8de-471a-93a0-d8b8d12e7153 is in state STARTED 2025-07-12 19:55:27.518907 | orchestrator | 2025-07-12 19:55:27 | INFO  | Task 8e47d045-e75a-417a-ba90-90fc7f029e73 is in state STARTED 2025-07-12 19:55:27.519469 | orchestrator | 2025-07-12 19:55:27 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:27.522497 | orchestrator | 2025-07-12 19:55:27 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:27.522562 | orchestrator | 2025-07-12 19:55:27 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:27.523320 | orchestrator | 2025-07-12 19:55:27 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:27.523344 | orchestrator | 2025-07-12 19:55:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:30.566918 | orchestrator | 2025-07-12 19:55:30 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:30.567005 | orchestrator | 2025-07-12 19:55:30 | INFO  | Task c86faba4-c8de-471a-93a0-d8b8d12e7153 is in state SUCCESS 2025-07-12 19:55:30.568829 | orchestrator | 2025-07-12 19:55:30 | INFO  | Task 8e47d045-e75a-417a-ba90-90fc7f029e73 is in state STARTED 2025-07-12 19:55:30.569556 | orchestrator | 2025-07-12 19:55:30 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:30.570583 | orchestrator | 2025-07-12 19:55:30 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:30.573924 | orchestrator | 2025-07-12 19:55:30 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:30.574323 | orchestrator | 2025-07-12 19:55:30 | INFO  | Task 
11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:30.574355 | orchestrator | 2025-07-12 19:55:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:33.615261 | orchestrator | 2025-07-12 19:55:33 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:33.617229 | orchestrator | 2025-07-12 19:55:33 | INFO  | Task 8e47d045-e75a-417a-ba90-90fc7f029e73 is in state STARTED 2025-07-12 19:55:33.620938 | orchestrator | 2025-07-12 19:55:33 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:33.624453 | orchestrator | 2025-07-12 19:55:33 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:33.624751 | orchestrator | 2025-07-12 19:55:33 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:33.625477 | orchestrator | 2025-07-12 19:55:33 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:33.625798 | orchestrator | 2025-07-12 19:55:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:36.699301 | orchestrator | 2025-07-12 19:55:36 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:36.699756 | orchestrator | 2025-07-12 19:55:36 | INFO  | Task 8e47d045-e75a-417a-ba90-90fc7f029e73 is in state STARTED 2025-07-12 19:55:36.700051 | orchestrator | 2025-07-12 19:55:36 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:36.705012 | orchestrator | 2025-07-12 19:55:36 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:36.705275 | orchestrator | 2025-07-12 19:55:36 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:36.713063 | orchestrator | 2025-07-12 19:55:36 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:36.713097 | orchestrator | 2025-07-12 19:55:36 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 19:55:39.746457 | orchestrator | 2025-07-12 19:55:39 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:39.747796 | orchestrator | 2025-07-12 19:55:39 | INFO  | Task 8e47d045-e75a-417a-ba90-90fc7f029e73 is in state SUCCESS 2025-07-12 19:55:39.750319 | orchestrator | 2025-07-12 19:55:39 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:39.751347 | orchestrator | 2025-07-12 19:55:39 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:39.752194 | orchestrator | 2025-07-12 19:55:39 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:39.753244 | orchestrator | 2025-07-12 19:55:39 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:39.753391 | orchestrator | 2025-07-12 19:55:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:42.815736 | orchestrator | 2025-07-12 19:55:42 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:42.817300 | orchestrator | 2025-07-12 19:55:42 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:42.821010 | orchestrator | 2025-07-12 19:55:42 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:42.822917 | orchestrator | 2025-07-12 19:55:42 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:42.824929 | orchestrator | 2025-07-12 19:55:42 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:42.824961 | orchestrator | 2025-07-12 19:55:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:45.886797 | orchestrator | 2025-07-12 19:55:45 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:45.890247 | orchestrator | 2025-07-12 19:55:45 | INFO  | Task 
8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:45.891567 | orchestrator | 2025-07-12 19:55:45 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:45.891600 | orchestrator | 2025-07-12 19:55:45 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:45.896631 | orchestrator | 2025-07-12 19:55:45 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:45.898295 | orchestrator | 2025-07-12 19:55:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:48.949153 | orchestrator | 2025-07-12 19:55:48 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:48.950476 | orchestrator | 2025-07-12 19:55:48 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:48.950573 | orchestrator | 2025-07-12 19:55:48 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:48.954180 | orchestrator | 2025-07-12 19:55:48 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:48.959401 | orchestrator | 2025-07-12 19:55:48 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:48.959458 | orchestrator | 2025-07-12 19:55:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:52.039841 | orchestrator | 2025-07-12 19:55:52 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:52.041621 | orchestrator | 2025-07-12 19:55:52 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:52.043961 | orchestrator | 2025-07-12 19:55:52 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state STARTED 2025-07-12 19:55:52.045383 | orchestrator | 2025-07-12 19:55:52 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED 2025-07-12 19:55:52.046979 | orchestrator | 2025-07-12 19:55:52 | INFO  | Task 
11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED 2025-07-12 19:55:52.047246 | orchestrator | 2025-07-12 19:55:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:55:55.098939 | orchestrator | 2025-07-12 19:55:55 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:55:55.105563 | orchestrator | 2025-07-12 19:55:55 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:55:55.105923 | orchestrator | 2025-07-12 19:55:55 | INFO  | Task 712decd4-77a3-44b8-8e92-c006e211f98c is in state SUCCESS 2025-07-12 19:55:55.107996 | orchestrator | 2025-07-12 19:55:55.108050 | orchestrator | 2025-07-12 19:55:55.108064 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-07-12 19:55:55.108077 | orchestrator | 2025-07-12 19:55:55.108089 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-07-12 19:55:55.108101 | orchestrator | Saturday 12 July 2025 19:54:54 +0000 (0:00:00.582) 0:00:00.582 ********* 2025-07-12 19:55:55.108113 | orchestrator | ok: [testbed-manager] => { 2025-07-12 19:55:55.108125 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-07-12 19:55:55.108138 | orchestrator | } 2025-07-12 19:55:55.108150 | orchestrator | 2025-07-12 19:55:55.108163 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-07-12 19:55:55.108174 | orchestrator | Saturday 12 July 2025 19:54:55 +0000 (0:00:00.454) 0:00:01.037 ********* 2025-07-12 19:55:55.108185 | orchestrator | ok: [testbed-manager] 2025-07-12 19:55:55.108197 | orchestrator | 2025-07-12 19:55:55.108208 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-07-12 19:55:55.108219 | orchestrator | Saturday 12 July 2025 19:54:57 +0000 (0:00:01.784) 0:00:02.821 ********* 2025-07-12 19:55:55.108230 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-07-12 19:55:55.108241 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-07-12 19:55:55.108276 | orchestrator | 2025-07-12 19:55:55.108294 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-07-12 19:55:55.108306 | orchestrator | Saturday 12 July 2025 19:54:58 +0000 (0:00:01.262) 0:00:04.084 ********* 2025-07-12 19:55:55.108317 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.108328 | orchestrator | 2025-07-12 19:55:55.108339 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-07-12 19:55:55.108349 | orchestrator | Saturday 12 July 2025 19:55:00 +0000 (0:00:01.873) 0:00:05.957 ********* 2025-07-12 19:55:55.108360 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.108371 | orchestrator | 2025-07-12 19:55:55.108382 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-07-12 19:55:55.108393 | orchestrator | Saturday 12 July 2025 19:55:01 +0000 (0:00:01.393) 0:00:07.350 ********* 2025-07-12 19:55:55.108404 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-07-12 19:55:55.108415 | orchestrator | ok: [testbed-manager] 2025-07-12 19:55:55.108426 | orchestrator | 2025-07-12 19:55:55.108437 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-07-12 19:55:55.108448 | orchestrator | Saturday 12 July 2025 19:55:26 +0000 (0:00:24.466) 0:00:31.817 ********* 2025-07-12 19:55:55.108458 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.108469 | orchestrator | 2025-07-12 19:55:55.108480 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:55:55.108492 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:55:55.108504 | orchestrator | 2025-07-12 19:55:55.108515 | orchestrator | 2025-07-12 19:55:55.108526 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:55:55.108537 | orchestrator | Saturday 12 July 2025 19:55:27 +0000 (0:00:01.903) 0:00:33.720 ********* 2025-07-12 19:55:55.108548 | orchestrator | =============================================================================== 2025-07-12 19:55:55.108560 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.47s 2025-07-12 19:55:55.108591 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.90s 2025-07-12 19:55:55.108604 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.87s 2025-07-12 19:55:55.108617 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.78s 2025-07-12 19:55:55.108629 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.39s 2025-07-12 19:55:55.108642 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.26s 2025-07-12 19:55:55.108654 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.45s 2025-07-12 19:55:55.108666 | orchestrator | 2025-07-12 19:55:55.108702 | orchestrator | 2025-07-12 19:55:55.108714 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-07-12 19:55:55.108726 | orchestrator | 2025-07-12 19:55:55.108739 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-07-12 19:55:55.108751 | orchestrator | Saturday 12 July 2025 19:54:55 +0000 (0:00:00.836) 0:00:00.836 ********* 2025-07-12 19:55:55.108764 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-07-12 19:55:55.108776 | orchestrator | 2025-07-12 19:55:55.108787 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-07-12 19:55:55.108798 | orchestrator | Saturday 12 July 2025 19:54:56 +0000 (0:00:00.999) 0:00:01.835 ********* 2025-07-12 19:55:55.108808 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-07-12 19:55:55.108819 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-07-12 19:55:55.108830 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-07-12 19:55:55.108841 | orchestrator | 2025-07-12 19:55:55.108852 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-07-12 19:55:55.108863 | orchestrator | Saturday 12 July 2025 19:54:58 +0000 (0:00:01.563) 0:00:03.399 ********* 2025-07-12 19:55:55.108874 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.108884 | orchestrator | 2025-07-12 19:55:55.108895 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-07-12 19:55:55.108906 | orchestrator | Saturday 12 July 2025 19:54:59 +0000 (0:00:01.394) 
0:00:04.794 ********* 2025-07-12 19:55:55.108930 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-07-12 19:55:55.108941 | orchestrator | ok: [testbed-manager] 2025-07-12 19:55:55.108952 | orchestrator | 2025-07-12 19:55:55.108963 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-07-12 19:55:55.108974 | orchestrator | Saturday 12 July 2025 19:55:32 +0000 (0:00:32.978) 0:00:37.772 ********* 2025-07-12 19:55:55.108985 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.108995 | orchestrator | 2025-07-12 19:55:55.109006 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-07-12 19:55:55.109017 | orchestrator | Saturday 12 July 2025 19:55:33 +0000 (0:00:00.902) 0:00:38.674 ********* 2025-07-12 19:55:55.109028 | orchestrator | ok: [testbed-manager] 2025-07-12 19:55:55.109039 | orchestrator | 2025-07-12 19:55:55.109050 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-07-12 19:55:55.109060 | orchestrator | Saturday 12 July 2025 19:55:34 +0000 (0:00:00.895) 0:00:39.569 ********* 2025-07-12 19:55:55.109071 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.109082 | orchestrator | 2025-07-12 19:55:55.109093 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-07-12 19:55:55.109104 | orchestrator | Saturday 12 July 2025 19:55:36 +0000 (0:00:02.272) 0:00:41.841 ********* 2025-07-12 19:55:55.109114 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.109125 | orchestrator | 2025-07-12 19:55:55.109136 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-07-12 19:55:55.109152 | orchestrator | Saturday 12 July 2025 19:55:37 +0000 (0:00:00.884) 0:00:42.726 ********* 2025-07-12 19:55:55.109170 | orchestrator | changed: 
[testbed-manager] 2025-07-12 19:55:55.109181 | orchestrator | 2025-07-12 19:55:55.109192 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-07-12 19:55:55.109203 | orchestrator | Saturday 12 July 2025 19:55:38 +0000 (0:00:00.771) 0:00:43.498 ********* 2025-07-12 19:55:55.109214 | orchestrator | ok: [testbed-manager] 2025-07-12 19:55:55.109225 | orchestrator | 2025-07-12 19:55:55.109236 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:55:55.109247 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:55:55.109258 | orchestrator | 2025-07-12 19:55:55.109268 | orchestrator | 2025-07-12 19:55:55.109279 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:55:55.109290 | orchestrator | Saturday 12 July 2025 19:55:38 +0000 (0:00:00.322) 0:00:43.820 ********* 2025-07-12 19:55:55.109301 | orchestrator | =============================================================================== 2025-07-12 19:55:55.109312 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 32.98s 2025-07-12 19:55:55.109322 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.27s 2025-07-12 19:55:55.109333 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.56s 2025-07-12 19:55:55.109344 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.39s 2025-07-12 19:55:55.109355 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.00s 2025-07-12 19:55:55.109366 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.90s 2025-07-12 19:55:55.109376 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.90s 
2025-07-12 19:55:55.109387 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.88s 2025-07-12 19:55:55.109398 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.77s 2025-07-12 19:55:55.109409 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.32s 2025-07-12 19:55:55.109420 | orchestrator | 2025-07-12 19:55:55.109430 | orchestrator | 2025-07-12 19:55:55.109441 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 19:55:55.109452 | orchestrator | 2025-07-12 19:55:55.109463 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 19:55:55.109474 | orchestrator | Saturday 12 July 2025 19:54:54 +0000 (0:00:00.800) 0:00:00.800 ********* 2025-07-12 19:55:55.109484 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-07-12 19:55:55.109495 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-07-12 19:55:55.109506 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-07-12 19:55:55.109516 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-07-12 19:55:55.109527 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-07-12 19:55:55.109538 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-07-12 19:55:55.109549 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-07-12 19:55:55.109560 | orchestrator | 2025-07-12 19:55:55.109572 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-07-12 19:55:55.109591 | orchestrator | 2025-07-12 19:55:55.109609 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-07-12 19:55:55.109628 | orchestrator | Saturday 12 July 2025 19:54:57 +0000 
(0:00:02.491) 0:00:03.291 ********* 2025-07-12 19:55:55.109664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:55:55.109716 | orchestrator | 2025-07-12 19:55:55.109728 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-07-12 19:55:55.109746 | orchestrator | Saturday 12 July 2025 19:54:58 +0000 (0:00:01.423) 0:00:04.714 ********* 2025-07-12 19:55:55.109758 | orchestrator | ok: [testbed-manager] 2025-07-12 19:55:55.109769 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:55:55.109780 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:55:55.109791 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:55:55.109802 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:55:55.109820 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:55:55.109831 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:55:55.109842 | orchestrator | 2025-07-12 19:55:55.109853 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-07-12 19:55:55.109864 | orchestrator | Saturday 12 July 2025 19:55:00 +0000 (0:00:02.094) 0:00:06.809 ********* 2025-07-12 19:55:55.109875 | orchestrator | ok: [testbed-manager] 2025-07-12 19:55:55.109886 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:55:55.109897 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:55:55.109908 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:55:55.109919 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:55:55.109930 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:55:55.109941 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:55:55.109952 | orchestrator | 2025-07-12 19:55:55.109963 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-07-12 19:55:55.109974 | 
orchestrator | Saturday 12 July 2025 19:55:04 +0000 (0:00:03.789) 0:00:10.598 ********* 2025-07-12 19:55:55.109985 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.109996 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:55:55.110007 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:55:55.110086 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:55:55.110101 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:55:55.110112 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:55:55.110123 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:55:55.110134 | orchestrator | 2025-07-12 19:55:55.110145 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-07-12 19:55:55.110161 | orchestrator | Saturday 12 July 2025 19:55:07 +0000 (0:00:02.900) 0:00:13.499 ********* 2025-07-12 19:55:55.110173 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.110184 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:55:55.110195 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:55:55.110205 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:55:55.110216 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:55:55.110227 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:55:55.110238 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:55:55.110249 | orchestrator | 2025-07-12 19:55:55.110260 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-07-12 19:55:55.110271 | orchestrator | Saturday 12 July 2025 19:55:17 +0000 (0:00:09.653) 0:00:23.153 ********* 2025-07-12 19:55:55.110282 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.110293 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:55:55.110304 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:55:55.110315 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:55:55.110326 | orchestrator | changed: [testbed-node-2] 
2025-07-12 19:55:55.110337 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:55:55.110348 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:55:55.110359 | orchestrator | 2025-07-12 19:55:55.110370 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-07-12 19:55:55.110381 | orchestrator | Saturday 12 July 2025 19:55:32 +0000 (0:00:15.666) 0:00:38.819 ********* 2025-07-12 19:55:55.110393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:55:55.110407 | orchestrator | 2025-07-12 19:55:55.110417 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-07-12 19:55:55.110428 | orchestrator | Saturday 12 July 2025 19:55:34 +0000 (0:00:01.766) 0:00:40.585 ********* 2025-07-12 19:55:55.110446 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-07-12 19:55:55.110457 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-07-12 19:55:55.110468 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-07-12 19:55:55.110479 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-07-12 19:55:55.110490 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-07-12 19:55:55.110501 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-07-12 19:55:55.110512 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-07-12 19:55:55.110523 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-07-12 19:55:55.110533 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-07-12 19:55:55.110544 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-07-12 19:55:55.110555 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 
2025-07-12 19:55:55.110566 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-07-12 19:55:55.110577 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-07-12 19:55:55.110588 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-07-12 19:55:55.110598 | orchestrator | 2025-07-12 19:55:55.110609 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-07-12 19:55:55.110621 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:04.730) 0:00:45.316 ********* 2025-07-12 19:55:55.110632 | orchestrator | ok: [testbed-manager] 2025-07-12 19:55:55.110643 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:55:55.110654 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:55:55.110664 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:55:55.110713 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:55:55.110725 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:55:55.110736 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:55:55.110747 | orchestrator | 2025-07-12 19:55:55.110758 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-07-12 19:55:55.110769 | orchestrator | Saturday 12 July 2025 19:55:40 +0000 (0:00:00.976) 0:00:46.292 ********* 2025-07-12 19:55:55.110780 | orchestrator | changed: [testbed-manager] 2025-07-12 19:55:55.110791 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:55:55.110802 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:55:55.110813 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:55:55.110824 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:55:55.110835 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:55:55.110846 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:55:55.110857 | orchestrator | 2025-07-12 19:55:55.110868 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-07-12 
19:55:55.110888 | orchestrator | Saturday 12 July 2025 19:55:41 +0000 (0:00:01.562) 0:00:47.855 *********
2025-07-12 19:55:55.110899 | orchestrator | ok: [testbed-manager]
2025-07-12 19:55:55.110910 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:55:55.110921 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:55:55.110932 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:55:55.110943 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:55:55.110953 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:55:55.110964 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:55:55.110975 | orchestrator |
2025-07-12 19:55:55.110986 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-07-12 19:55:55.110997 | orchestrator | Saturday 12 July 2025 19:55:43 +0000 (0:00:01.487) 0:00:49.342 *********
2025-07-12 19:55:55.111008 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:55:55.111019 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:55:55.111030 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:55:55.111040 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:55:55.111051 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:55:55.111061 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:55:55.111072 | orchestrator | ok: [testbed-manager]
2025-07-12 19:55:55.111083 | orchestrator |
2025-07-12 19:55:55.111094 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-07-12 19:55:55.111111 | orchestrator | Saturday 12 July 2025 19:55:45 +0000 (0:00:01.928) 0:00:51.270 *********
2025-07-12 19:55:55.111123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-07-12 19:55:55.111140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:55:55.111152 | orchestrator |
2025-07-12 19:55:55.111163 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-07-12 19:55:55.111174 | orchestrator | Saturday 12 July 2025 19:55:47 +0000 (0:00:02.255) 0:00:53.526 *********
2025-07-12 19:55:55.111185 | orchestrator | changed: [testbed-manager]
2025-07-12 19:55:55.111196 | orchestrator |
2025-07-12 19:55:55.111206 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-07-12 19:55:55.111217 | orchestrator | Saturday 12 July 2025 19:55:50 +0000 (0:00:02.574) 0:00:56.100 *********
2025-07-12 19:55:55.111228 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:55:55.111239 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:55:55.111250 | orchestrator | changed: [testbed-manager]
2025-07-12 19:55:55.111261 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:55:55.111271 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:55:55.111282 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:55:55.111293 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:55:55.111304 | orchestrator |
2025-07-12 19:55:55.111315 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:55:55.111326 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:55:55.111338 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:55:55.111349 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:55:55.111359 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:55:55.111370 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:55:55.111381 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:55:55.111392 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:55:55.111403 | orchestrator |
2025-07-12 19:55:55.111414 | orchestrator |
2025-07-12 19:55:55.111425 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:55:55.111436 | orchestrator | Saturday 12 July 2025 19:55:53 +0000 (0:00:03.332) 0:00:59.433 *********
2025-07-12 19:55:55.111447 | orchestrator | ===============================================================================
2025-07-12 19:55:55.111458 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 15.67s
2025-07-12 19:55:55.111469 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.65s
2025-07-12 19:55:55.111479 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.73s
2025-07-12 19:55:55.111490 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.79s
2025-07-12 19:55:55.111501 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.33s
2025-07-12 19:55:55.111512 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.90s
2025-07-12 19:55:55.111522 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.57s
2025-07-12 19:55:55.111539 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.49s
2025-07-12 19:55:55.111550 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.26s
2025-07-12 19:55:55.111561 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.09s
2025-07-12 19:55:55.111572 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.93s
2025-07-12 19:55:55.111588 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.77s
2025-07-12 19:55:55.111600 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.56s
2025-07-12 19:55:55.111611 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.49s
2025-07-12 19:55:55.111622 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.42s
2025-07-12 19:55:55.111632 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.98s
2025-07-12 19:55:55.111643 | orchestrator | 2025-07-12 19:55:55 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:55:55.111655 | orchestrator | 2025-07-12 19:55:55 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:55:55.111666 | orchestrator | 2025-07-12 19:55:55 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:55:58.148274 | orchestrator | 2025-07-12 19:55:58 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:55:58.149618 | orchestrator | 2025-07-12 19:55:58 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:55:58.150959 | orchestrator | 2025-07-12 19:55:58 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:55:58.152523 | orchestrator | 2025-07-12 19:55:58 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:55:58.152555 | orchestrator | 2025-07-12 19:55:58 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:01.192844 | orchestrator | 2025-07-12 19:56:01 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:01.194542 | orchestrator | 2025-07-12 19:56:01 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:01.196797 | orchestrator | 2025-07-12 19:56:01 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:01.198127 | orchestrator | 2025-07-12 19:56:01 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:01.198151 | orchestrator | 2025-07-12 19:56:01 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:04.252462 | orchestrator | 2025-07-12 19:56:04 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:04.253970 | orchestrator | 2025-07-12 19:56:04 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:04.255321 | orchestrator | 2025-07-12 19:56:04 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:04.256732 | orchestrator | 2025-07-12 19:56:04 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:04.257393 | orchestrator | 2025-07-12 19:56:04 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:07.301010 | orchestrator | 2025-07-12 19:56:07 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:07.301634 | orchestrator | 2025-07-12 19:56:07 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:07.302108 | orchestrator | 2025-07-12 19:56:07 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:07.303814 | orchestrator | 2025-07-12 19:56:07 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:07.303840 | orchestrator | 2025-07-12 19:56:07 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:10.358854 | orchestrator | 2025-07-12 19:56:10 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:10.359397 | orchestrator | 2025-07-12 19:56:10 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:10.360294 | orchestrator | 2025-07-12 19:56:10 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:10.361813 | orchestrator | 2025-07-12 19:56:10 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:10.362119 | orchestrator | 2025-07-12 19:56:10 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:13.407053 | orchestrator | 2025-07-12 19:56:13 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:13.414149 | orchestrator | 2025-07-12 19:56:13 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:13.414210 | orchestrator | 2025-07-12 19:56:13 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:13.414232 | orchestrator | 2025-07-12 19:56:13 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:13.414252 | orchestrator | 2025-07-12 19:56:13 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:16.455216 | orchestrator | 2025-07-12 19:56:16 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:16.455330 | orchestrator | 2025-07-12 19:56:16 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:16.455896 | orchestrator | 2025-07-12 19:56:16 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:16.457137 | orchestrator | 2025-07-12 19:56:16 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:16.457226 | orchestrator | 2025-07-12 19:56:16 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:19.515278 | orchestrator | 2025-07-12 19:56:19 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:19.520704 | orchestrator | 2025-07-12 19:56:19 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:19.523787 | orchestrator | 2025-07-12 19:56:19 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:19.526872 | orchestrator | 2025-07-12 19:56:19 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:19.527280 | orchestrator | 2025-07-12 19:56:19 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:22.572192 | orchestrator | 2025-07-12 19:56:22 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:22.573822 | orchestrator | 2025-07-12 19:56:22 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:22.575293 | orchestrator | 2025-07-12 19:56:22 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:22.577111 | orchestrator | 2025-07-12 19:56:22 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:22.577499 | orchestrator | 2025-07-12 19:56:22 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:25.620642 | orchestrator | 2025-07-12 19:56:25 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:25.620829 | orchestrator | 2025-07-12 19:56:25 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:25.622235 | orchestrator | 2025-07-12 19:56:25 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:25.624344 | orchestrator | 2025-07-12 19:56:25 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:25.624378 | orchestrator | 2025-07-12 19:56:25 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:28.672206 | orchestrator | 2025-07-12 19:56:28 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:28.673503 | orchestrator | 2025-07-12 19:56:28 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:28.675576 | orchestrator | 2025-07-12 19:56:28 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:28.677002 | orchestrator | 2025-07-12 19:56:28 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:28.677278 | orchestrator | 2025-07-12 19:56:28 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:31.714439 | orchestrator | 2025-07-12 19:56:31 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:31.716378 | orchestrator | 2025-07-12 19:56:31 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:31.717522 | orchestrator | 2025-07-12 19:56:31 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:31.717871 | orchestrator | 2025-07-12 19:56:31 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:31.718231 | orchestrator | 2025-07-12 19:56:31 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:34.763863 | orchestrator | 2025-07-12 19:56:34 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:34.765865 | orchestrator | 2025-07-12 19:56:34 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:34.770410 | orchestrator | 2025-07-12 19:56:34 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:34.773244 | orchestrator | 2025-07-12 19:56:34 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:34.773474 | orchestrator | 2025-07-12 19:56:34 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:37.821436 | orchestrator | 2025-07-12 19:56:37 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:37.822743 | orchestrator | 2025-07-12 19:56:37 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:37.824558 | orchestrator | 2025-07-12 19:56:37 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:37.828392 | orchestrator | 2025-07-12 19:56:37 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:37.829916 | orchestrator | 2025-07-12 19:56:37 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:40.905899 | orchestrator | 2025-07-12 19:56:40 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:40.908629 | orchestrator | 2025-07-12 19:56:40 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:40.910355 | orchestrator | 2025-07-12 19:56:40 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:40.912027 | orchestrator | 2025-07-12 19:56:40 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:40.912135 | orchestrator | 2025-07-12 19:56:40 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:43.961851 | orchestrator | 2025-07-12 19:56:43 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:43.965928 | orchestrator | 2025-07-12 19:56:43 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:43.967964 | orchestrator | 2025-07-12 19:56:43 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:43.969230 | orchestrator | 2025-07-12 19:56:43 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:43.970007 | orchestrator | 2025-07-12 19:56:43 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:47.007495 | orchestrator | 2025-07-12 19:56:47 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:47.009327 | orchestrator | 2025-07-12 19:56:47 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:47.011378 | orchestrator | 2025-07-12 19:56:47 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state STARTED
2025-07-12 19:56:47.012688 | orchestrator | 2025-07-12 19:56:47 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:47.013175 | orchestrator | 2025-07-12 19:56:47 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:50.042750 | orchestrator | 2025-07-12 19:56:50 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:50.043763 | orchestrator | 2025-07-12 19:56:50 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:50.043981 | orchestrator | 2025-07-12 19:56:50 | INFO  | Task 59b63e0b-d9ab-4b94-baa3-70fc668ddaba is in state SUCCESS
2025-07-12 19:56:50.045011 | orchestrator | 2025-07-12 19:56:50 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:50.045137 | orchestrator | 2025-07-12 19:56:50 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:53.076772 | orchestrator | 2025-07-12 19:56:53 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:53.077913 | orchestrator | 2025-07-12 19:56:53 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:53.079414 | orchestrator | 2025-07-12 19:56:53 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:53.079438 | orchestrator | 2025-07-12 19:56:53 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:56.122450 | orchestrator | 2025-07-12 19:56:56 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:56.124328 | orchestrator | 2025-07-12 19:56:56 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:56.128711 | orchestrator | 2025-07-12 19:56:56 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:56.128762 | orchestrator | 2025-07-12 19:56:56 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:56:59.166164 | orchestrator | 2025-07-12 19:56:59 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:56:59.166649 | orchestrator | 2025-07-12 19:56:59 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:56:59.167417 | orchestrator | 2025-07-12 19:56:59 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:56:59.167454 | orchestrator | 2025-07-12 19:56:59 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:02.209211 | orchestrator | 2025-07-12 19:57:02 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:02.211575 | orchestrator | 2025-07-12 19:57:02 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:02.211629 | orchestrator | 2025-07-12 19:57:02 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:57:02.211638 | orchestrator | 2025-07-12 19:57:02 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:05.251372 | orchestrator | 2025-07-12 19:57:05 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:05.252573 | orchestrator | 2025-07-12 19:57:05 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:05.253881 | orchestrator | 2025-07-12 19:57:05 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:57:05.254166 | orchestrator | 2025-07-12 19:57:05 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:08.298555 | orchestrator | 2025-07-12 19:57:08 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:08.299926 | orchestrator | 2025-07-12 19:57:08 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:08.301656 | orchestrator | 2025-07-12 19:57:08 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:57:08.301821 | orchestrator | 2025-07-12 19:57:08 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:11.326187 | orchestrator | 2025-07-12 19:57:11 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:11.326874 | orchestrator | 2025-07-12 19:57:11 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:11.327581 | orchestrator | 2025-07-12 19:57:11 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:57:11.327681 | orchestrator | 2025-07-12 19:57:11 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:14.370100 | orchestrator | 2025-07-12 19:57:14 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:14.372247 | orchestrator | 2025-07-12 19:57:14 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:14.375504 | orchestrator | 2025-07-12 19:57:14 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:57:14.375652 | orchestrator | 2025-07-12 19:57:14 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:17.419673 | orchestrator | 2025-07-12 19:57:17 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:17.421297 | orchestrator | 2025-07-12 19:57:17 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:17.422556 | orchestrator | 2025-07-12 19:57:17 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state STARTED
2025-07-12 19:57:17.422585 | orchestrator | 2025-07-12 19:57:17 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:20.452215 | orchestrator | 2025-07-12 19:57:20 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:20.456447 | orchestrator | 2025-07-12 19:57:20 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:57:20.456489 | orchestrator | 2025-07-12 19:57:20 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:57:20.456499 | orchestrator | 2025-07-12 19:57:20 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:20.456509 | orchestrator | 2025-07-12 19:57:20 | INFO  | Task 5c86f323-1b95-4933-9284-27acb8dbfdb3 is in state STARTED
2025-07-12 19:57:20.459313 | orchestrator | 2025-07-12 19:57:20 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED
2025-07-12 19:57:20.463660 | orchestrator | 2025-07-12 19:57:20 | INFO  | Task 11a5f7f8-49f6-44bf-8f41-c1f454dcb962 is in state SUCCESS
2025-07-12 19:57:20.466387 | orchestrator |
2025-07-12 19:57:20.466417 | orchestrator |
2025-07-12 19:57:20.466427 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-07-12 19:57:20.466438 | orchestrator |
2025-07-12 19:57:20.466448 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-07-12 19:57:20.466458 | orchestrator | Saturday 12 July 2025 19:55:14 +0000 (0:00:00.240) 0:00:00.241 *********
2025-07-12 19:57:20.466468 | orchestrator | ok: [testbed-manager]
2025-07-12 19:57:20.466477 | orchestrator |
2025-07-12 19:57:20.466537 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-07-12 19:57:20.466621 | orchestrator | Saturday 12 July 2025 19:55:15 +0000 (0:00:00.845) 0:00:01.086 *********
2025-07-12 19:57:20.466633 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-07-12 19:57:20.466642 | orchestrator |
2025-07-12 19:57:20.466650 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-07-12 19:57:20.466659 | orchestrator | Saturday 12 July 2025 19:55:16 +0000 (0:00:00.736) 0:00:01.823 *********
2025-07-12 19:57:20.466667 | orchestrator | changed: [testbed-manager]
2025-07-12 19:57:20.466676 | orchestrator |
2025-07-12 19:57:20.466721 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-07-12 19:57:20.466731 | orchestrator | Saturday 12 July 2025 19:55:18 +0000 (0:00:01.787) 0:00:03.611 *********
2025-07-12 19:57:20.466740 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-07-12 19:57:20.466749 | orchestrator | ok: [testbed-manager]
2025-07-12 19:57:20.466797 | orchestrator |
2025-07-12 19:57:20.466809 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-07-12 19:57:20.468148 | orchestrator | Saturday 12 July 2025 19:56:29 +0000 (0:01:11.449) 0:01:15.060 *********
2025-07-12 19:57:20.468183 | orchestrator | fatal: [testbed-manager]: FAILED! => {"msg": "The conditional check 'result[\"status\"][\"ActiveState\"] == \"active\"' failed. The error was: error while evaluating conditional (result[\"status\"][\"ActiveState\"] == \"active\"): 'dict object' has no attribute 'status'"}
2025-07-12 19:57:20.468198 | orchestrator |
2025-07-12 19:57:20.468208 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:57:20.468217 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-12 19:57:20.468226 | orchestrator |
2025-07-12 19:57:20.468235 | orchestrator |
2025-07-12 19:57:20.468244 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:57:20.468253 | orchestrator | Saturday 12 July 2025 19:56:47 +0000 (0:00:17.397) 0:01:32.457 *********
2025-07-12 19:57:20.468261 | orchestrator | ===============================================================================
2025-07-12 19:57:20.468270 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 71.45s
2025-07-12 19:57:20.468278 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 17.40s
2025-07-12 19:57:20.468287 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.79s
2025-07-12 19:57:20.468296 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.85s
2025-07-12 19:57:20.468304 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.74s
2025-07-12 19:57:20.468313 | orchestrator |
2025-07-12 19:57:20.468321 | orchestrator |
2025-07-12 19:57:20.468330 | orchestrator | PLAY [Apply role common] *******************************************************
2025-07-12 19:57:20.468339 | orchestrator |
2025-07-12 19:57:20.468347 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-12 19:57:20.468356 | orchestrator | Saturday 12 July 2025 19:54:47 +0000 (0:00:00.211) 0:00:00.211 *********
2025-07-12 19:57:20.468364 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:57:20.468389 | orchestrator |
2025-07-12 19:57:20.468399 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-07-12 19:57:20.468407 | orchestrator | Saturday 12 July 2025 19:54:48 +0000 (0:00:01.107) 0:00:01.318 *********
2025-07-12 19:57:20.468416 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 19:57:20.468425 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 19:57:20.468433 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 19:57:20.468442 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 19:57:20.468450 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 19:57:20.468480 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 19:57:20.468499 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 19:57:20.468508 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 19:57:20.468517 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 19:57:20.468527 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 19:57:20.468535 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 19:57:20.468544 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 19:57:20.468552 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-12 19:57:20.468681 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 19:57:20.468696 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 19:57:20.468705 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 19:57:20.468797 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 19:57:20.468810 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-12 19:57:20.468835 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 19:57:20.468845 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 19:57:20.468855 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-12 19:57:20.468865 | orchestrator |
2025-07-12 19:57:20.468874 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-12 19:57:20.468884 | orchestrator | Saturday 12 July 2025 19:54:52 +0000 (0:00:04.323) 0:00:05.641 *********
2025-07-12 19:57:20.468894 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:57:20.468905 | orchestrator |
2025-07-12 19:57:20.468914 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-07-12 19:57:20.468924 | orchestrator | Saturday 12 July 2025 19:54:54 +0000 (0:00:01.337) 0:00:06.979 *********
2025-07-12 19:57:20.468942 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 19:57:20.468963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 19:57:20.468975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 19:57:20.468985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 19:57:20.468995 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 19:57:20.469040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 19:57:20.469052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 19:57:20.469066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 19:57:20.469077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 19:57:20.469092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 19:57:20.469103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 19:57:20.469114 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 19:57:20.469164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 19:57:20.469175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 19:57:20.469185 | orchestrator | changed: [testbed-node-4] =>
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.469199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.469214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.469223 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.469233 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.469242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.469251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.469260 | orchestrator | 2025-07-12 19:57:20.469270 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-07-12 19:57:20.469279 | orchestrator | Saturday 12 July 2025 19:54:58 +0000 (0:00:04.758) 0:00:11.737 ********* 2025-07-12 19:57:20.469328 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469339 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469359 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469369 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:57:20.469378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469406 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:57:20.469415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469493 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:57:20.469514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469532 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469541 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:57:20.469550 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:57:20.469566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469631 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:57:20.469669 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469697 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:57:20.469706 
| orchestrator | 2025-07-12 19:57:20.469715 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-07-12 19:57:20.469724 | orchestrator | Saturday 12 July 2025 19:55:00 +0000 (0:00:01.212) 0:00:12.950 ********* 2025-07-12 19:57:20.469894 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469911 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469936 | orchestrator | skipping: [testbed-manager] 2025-07-12 
19:57:20.469945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.469976 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:57:20.469985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 
'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.469994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.470093 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470112 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:57:20.470124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.470134 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470151 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:57:20.470160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.470169 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:57:20.470179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470209 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:57:20.470218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-12 19:57:20.470230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.470249 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:57:20.470257 | orchestrator | 2025-07-12 19:57:20.470266 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-07-12 19:57:20.470275 | orchestrator | Saturday 12 July 2025 19:55:02 +0000 (0:00:02.352) 0:00:15.303 ********* 2025-07-12 19:57:20.470284 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:57:20.470293 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:57:20.470301 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:57:20.470310 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:57:20.470319 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:57:20.470327 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:57:20.470336 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:57:20.470344 | orchestrator | 2025-07-12 19:57:20.470353 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-07-12 19:57:20.470362 | orchestrator | Saturday 12 July 2025 19:55:03 +0000 (0:00:00.924) 0:00:16.228 ********* 2025-07-12 19:57:20.470371 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:57:20.470379 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:57:20.470388 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:57:20.470396 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:57:20.470405 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:57:20.470422 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:57:20.470431 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:57:20.470445 | orchestrator | 
2025-07-12 19:57:20.470454 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-07-12 19:57:20.470462 | orchestrator | Saturday 12 July 2025 19:55:04 +0000 (0:00:01.225) 0:00:17.453 ********* 2025-07-12 19:57:20.470471 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.470537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.470555 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
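The config.json files copied here follow kolla's container bootstrap format: a start command plus a list of config files for the container entrypoint to copy into place at startup. A hedged sketch of building such a payload (the cron command and file paths below are illustrative assumptions, not values taken from this job's output):

```python
# Hedged sketch of kolla's config.json bootstrap payload; the command
# and paths are assumed for illustration, not read from the log.
import json

def build_config_json(command, files):
    """Render a kolla-style config.json: start command + files to install."""
    return json.dumps(
        {
            "command": command,
            "config_files": [
                {"source": src, "dest": dst, "owner": owner, "perm": perm}
                for src, dst, owner, perm in files
            ],
        },
        indent=4,
    )

payload = build_config_json(
    "crond -s -n",  # assumed start command for the cron container
    [("/var/lib/kolla/config_files/logrotate.conf", "/etc/logrotate.conf", "root", "0644")],
)
print(payload)
```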
2025-07-12 19:57:20.470565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.470575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.470604 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.470614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470629 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.470658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470777 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.470787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470800 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470862 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470887 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470896 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.470918 | orchestrator | 2025-07-12 19:57:20.470927 | orchestrator | TASK [common : Find custom fluentd input 
config files] *************************
2025-07-12 19:57:20.470935 | orchestrator | Saturday 12 July 2025 19:55:10 +0000 (0:00:05.649) 0:00:23.103 *********
2025-07-12 19:57:20.470945 | orchestrator | [WARNING]: Skipped
2025-07-12 19:57:20.470954 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-07-12 19:57:20.470963 | orchestrator | to this access issue:
2025-07-12 19:57:20.470972 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-07-12 19:57:20.470986 | orchestrator | directory
2025-07-12 19:57:20.470995 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 19:57:20.471004 | orchestrator |
2025-07-12 19:57:20.471013 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-07-12 19:57:20.471022 | orchestrator | Saturday 12 July 2025 19:55:12 +0000 (0:00:01.802) 0:00:24.905 *********
2025-07-12 19:57:20.471031 | orchestrator | [WARNING]: Skipped
2025-07-12 19:57:20.471039 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-07-12 19:57:20.471048 | orchestrator | to this access issue:
2025-07-12 19:57:20.471056 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-07-12 19:57:20.471065 | orchestrator | directory
2025-07-12 19:57:20.471074 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 19:57:20.471082 | orchestrator |
2025-07-12 19:57:20.471091 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-07-12 19:57:20.471100 | orchestrator | Saturday 12 July 2025 19:55:13 +0000 (0:00:01.391) 0:00:26.297 *********
2025-07-12 19:57:20.471108 | orchestrator | [WARNING]: Skipped
2025-07-12 19:57:20.471117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-07-12 19:57:20.471126 | orchestrator | to this access issue:
2025-07-12 19:57:20.471134 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-07-12 19:57:20.471143 | orchestrator | directory
2025-07-12 19:57:20.471152 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 19:57:20.471160 | orchestrator |
2025-07-12 19:57:20.471169 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-07-12 19:57:20.471178 | orchestrator | Saturday 12 July 2025 19:55:14 +0000 (0:00:00.780) 0:00:27.078 *********
2025-07-12 19:57:20.471186 | orchestrator | [WARNING]: Skipped
2025-07-12 19:57:20.471195 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-07-12 19:57:20.471203 | orchestrator | to this access issue:
2025-07-12 19:57:20.471212 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-07-12 19:57:20.471221 | orchestrator | directory
2025-07-12 19:57:20.471229 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 19:57:20.471238 | orchestrator |
2025-07-12 19:57:20.471246 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-07-12 19:57:20.471255 | orchestrator | Saturday 12 July 2025 19:55:14 +0000 (0:00:00.714) 0:00:27.793 *********
2025-07-12 19:57:20.471264 | orchestrator | changed: [testbed-manager]
2025-07-12 19:57:20.471272 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:57:20.471281 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:57:20.471289 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:57:20.471298 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:57:20.471306 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:57:20.471315 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:57:20.471323 | orchestrator |
2025-07-12 19:57:20.471332 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-07-12 19:57:20.471341 | orchestrator | Saturday 12 July 2025 19:55:19 +0000 (0:00:04.800) 0:00:32.594 *********
2025-07-12 19:57:20.471353 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 19:57:20.471363 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 19:57:20.471372 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 19:57:20.471380 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 19:57:20.471389 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 19:57:20.471406 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 19:57:20.471420 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-12 19:57:20.471428 | orchestrator |
2025-07-12 19:57:20.471437 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-07-12 19:57:20.471446 | orchestrator | Saturday 12 July 2025 19:55:23 +0000 (0:00:02.691) 0:00:35.906 *********
2025-07-12 19:57:20.471454 | orchestrator | changed: [testbed-manager]
2025-07-12 19:57:20.471463 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:57:20.471471 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:57:20.471480 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:57:20.471488 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:57:20.471521 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:57:20.471530 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:57:20.471539 | orchestrator |
2025-07-12 19:57:20.471548 | orchestrator | TASK [common : 
Ensuring config directories have correct owner and permission] *** 2025-07-12 19:57:20.471556 | orchestrator | Saturday 12 July 2025 19:55:25 +0000 (0:00:02.691) 0:00:38.598 ********* 2025-07-12 19:57:20.471569 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.471579 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.471588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.471598 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.471607 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.471661 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.471678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.471937 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.471953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.471962 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.471972 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.471981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.471995 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.472014 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472023 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.472036 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472045 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472055 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472064 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.472073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 19:57:20.472082 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472096 | orchestrator | 2025-07-12 19:57:20.472105 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-07-12 19:57:20.472113 | orchestrator | Saturday 12 July 2025 19:55:28 +0000 (0:00:02.497) 
0:00:41.095 *********
2025-07-12 19:57:20.472126 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 19:57:20.472136 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 19:57:20.472144 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 19:57:20.472153 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 19:57:20.472162 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 19:57:20.472170 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 19:57:20.472179 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-07-12 19:57:20.472187 | orchestrator |
2025-07-12 19:57:20.472196 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-07-12 19:57:20.472205 | orchestrator | Saturday 12 July 2025 19:55:30 +0000 (0:00:02.218) 0:00:43.314 *********
2025-07-12 19:57:20.472214 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 19:57:20.472222 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 19:57:20.472231 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 19:57:20.472240 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 19:57:20.472248 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 19:57:20.472257 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
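The Erlang cookie ensured by the task above is the shared secret every RabbitMQ (Erlang) node in a cluster must agree on; nodes with mismatched cookies refuse to talk to each other. A hedged sketch of generating one: the 20-character uppercase form mirrors Erlang's conventional cookie style, but the exact length and alphabet are assumptions here, not kolla-ansible's generation rule.

```python
# Hedged sketch: generate a random Erlang-style cookie. Length and
# alphabet are illustrative assumptions, not kolla-ansible's exact rule.
import secrets
import string

def generate_erlang_cookie(length=20):
    """Return a random uppercase-letter cookie suitable for Erlang clustering."""
    alphabet = string.ascii_uppercase
    return "".join(secrets.choice(alphabet) for _ in range(length))

cookie = generate_erlang_cookie()
print(cookie)  # a 20-character uppercase string
```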
2025-07-12 19:57:20.472269 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 19:57:20.472278 | orchestrator | 2025-07-12 19:57:20.472287 | orchestrator | TASK [common : Check common containers] **************************************** 2025-07-12 19:57:20.472295 | orchestrator | Saturday 12 July 2025 19:55:33 +0000 (0:00:02.707) 0:00:46.022 ********* 2025-07-12 19:57:20.472304 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.472314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.472323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.472337 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.472371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472380 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.472393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.472423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472435 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 19:57:20.472449 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472545 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472560 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472569 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472631 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 19:57:20.472640 | orchestrator | 2025-07-12 19:57:20.472744 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-07-12 19:57:20.472753 | orchestrator | Saturday 12 July 2025 19:55:36 +0000 (0:00:03.144) 0:00:49.166 ********* 2025-07-12 19:57:20.472762 | orchestrator | changed: [testbed-manager] 2025-07-12 19:57:20.472771 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:57:20.472780 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:57:20.472788 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:57:20.472797 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:57:20.472806 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:57:20.472814 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:57:20.472863 | orchestrator | 2025-07-12 19:57:20.472873 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-07-12 19:57:20.472881 | orchestrator | Saturday 12 July 2025 19:55:38 +0000 (0:00:01.709) 0:00:50.875 ********* 2025-07-12 19:57:20.472890 | orchestrator | changed: [testbed-manager] 2025-07-12 19:57:20.472898 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:57:20.472907 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:57:20.472915 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:57:20.472924 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:57:20.472932 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:57:20.472941 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:57:20.472949 | orchestrator | 2025-07-12 19:57:20.472958 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 19:57:20.472972 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:01.282) 0:00:52.158 ********* 2025-07-12 19:57:20.472981 | orchestrator | 2025-07-12 19:57:20.472989 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2025-07-12 19:57:20.472998 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:00.188) 0:00:52.346 ********* 2025-07-12 19:57:20.473007 | orchestrator | 2025-07-12 19:57:20.473031 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 19:57:20.473040 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:00.061) 0:00:52.408 ********* 2025-07-12 19:57:20.473054 | orchestrator | 2025-07-12 19:57:20.473063 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 19:57:20.473072 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:00.057) 0:00:52.466 ********* 2025-07-12 19:57:20.473080 | orchestrator | 2025-07-12 19:57:20.473089 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 19:57:20.473097 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:00.059) 0:00:52.525 ********* 2025-07-12 19:57:20.473106 | orchestrator | 2025-07-12 19:57:20.473115 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 19:57:20.473123 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:00.059) 0:00:52.585 ********* 2025-07-12 19:57:20.473132 | orchestrator | 2025-07-12 19:57:20.473140 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-12 19:57:20.473149 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:00.059) 0:00:52.644 ********* 2025-07-12 19:57:20.473158 | orchestrator | 2025-07-12 19:57:20.473166 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-07-12 19:57:20.473175 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:00.079) 0:00:52.724 ********* 2025-07-12 19:57:20.473183 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:57:20.473192 | 
orchestrator | changed: [testbed-node-2] 2025-07-12 19:57:20.473200 | orchestrator | changed: [testbed-manager] 2025-07-12 19:57:20.473209 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:57:20.473217 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:57:20.473226 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:57:20.473234 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:57:20.473243 | orchestrator | 2025-07-12 19:57:20.473251 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-07-12 19:57:20.473260 | orchestrator | Saturday 12 July 2025 19:56:22 +0000 (0:00:43.017) 0:01:35.741 ********* 2025-07-12 19:57:20.473269 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:57:20.473277 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:57:20.473286 | orchestrator | changed: [testbed-manager] 2025-07-12 19:57:20.473294 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:57:20.473302 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:57:20.473311 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:57:20.473327 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:57:20.473336 | orchestrator | 2025-07-12 19:57:20.473345 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-07-12 19:57:20.473354 | orchestrator | Saturday 12 July 2025 19:57:06 +0000 (0:00:43.491) 0:02:19.233 ********* 2025-07-12 19:57:20.473362 | orchestrator | ok: [testbed-manager] 2025-07-12 19:57:20.473371 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:57:20.473402 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:57:20.473413 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:57:20.473422 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:57:20.473431 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:57:20.473440 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:57:20.473484 | orchestrator | 2025-07-12 19:57:20.473494 | orchestrator | 
RUNNING HANDLER [common : Restart cron container] ****************************** 2025-07-12 19:57:20.473502 | orchestrator | Saturday 12 July 2025 19:57:08 +0000 (0:00:01.957) 0:02:21.190 ********* 2025-07-12 19:57:20.473511 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:57:20.473671 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:57:20.473681 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:57:20.473690 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:57:20.473698 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:57:20.473706 | orchestrator | changed: [testbed-manager] 2025-07-12 19:57:20.473715 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:57:20.473724 | orchestrator | 2025-07-12 19:57:20.473732 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:57:20.473741 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 19:57:20.473762 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 19:57:20.473772 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 19:57:20.473781 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 19:57:20.473790 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 19:57:20.473799 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 19:57:20.473808 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-12 19:57:20.473828 | orchestrator | 2025-07-12 19:57:20.473837 | orchestrator | 2025-07-12 19:57:20.473846 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 19:57:20.473855 | orchestrator | Saturday 12 July 2025 19:57:17 +0000 (0:00:09.578) 0:02:30.768 ********* 2025-07-12 19:57:20.473863 | orchestrator | =============================================================================== 2025-07-12 19:57:20.473872 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 43.49s 2025-07-12 19:57:20.473880 | orchestrator | common : Restart fluentd container ------------------------------------- 43.02s 2025-07-12 19:57:20.473893 | orchestrator | common : Restart cron container ----------------------------------------- 9.58s 2025-07-12 19:57:20.473902 | orchestrator | common : Copying over config.json files for services -------------------- 5.65s 2025-07-12 19:57:20.473911 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.80s 2025-07-12 19:57:20.473919 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.76s 2025-07-12 19:57:20.473928 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.32s 2025-07-12 19:57:20.473936 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.31s 2025-07-12 19:57:20.473945 | orchestrator | common : Check common containers ---------------------------------------- 3.14s 2025-07-12 19:57:20.473954 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.71s 2025-07-12 19:57:20.473962 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.69s 2025-07-12 19:57:20.473971 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.50s 2025-07-12 19:57:20.473979 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.35s 2025-07-12 19:57:20.473988 | orchestrator | common : Copy rabbitmq-env.conf 
to kolla toolbox ------------------------ 2.22s 2025-07-12 19:57:20.473996 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.96s 2025-07-12 19:57:20.474005 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.80s 2025-07-12 19:57:20.474014 | orchestrator | common : Creating log volume -------------------------------------------- 1.71s 2025-07-12 19:57:20.474050 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.39s 2025-07-12 19:57:20.474059 | orchestrator | common : include_tasks -------------------------------------------------- 1.34s 2025-07-12 19:57:20.474097 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.28s 2025-07-12 19:57:20.474106 | orchestrator | 2025-07-12 19:57:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:23.503689 | orchestrator | 2025-07-12 19:57:23 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:23.503889 | orchestrator | 2025-07-12 19:57:23 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:23.503938 | orchestrator | 2025-07-12 19:57:23 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:23.504500 | orchestrator | 2025-07-12 19:57:23 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:23.505931 | orchestrator | 2025-07-12 19:57:23 | INFO  | Task 5c86f323-1b95-4933-9284-27acb8dbfdb3 is in state STARTED 2025-07-12 19:57:23.507615 | orchestrator | 2025-07-12 19:57:23 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED 2025-07-12 19:57:23.507647 | orchestrator | 2025-07-12 19:57:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:26.536418 | orchestrator | 2025-07-12 19:57:26 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:26.536980 | 
orchestrator | 2025-07-12 19:57:26 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:26.537455 | orchestrator | 2025-07-12 19:57:26 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:26.539149 | orchestrator | 2025-07-12 19:57:26 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:26.540220 | orchestrator | 2025-07-12 19:57:26 | INFO  | Task 5c86f323-1b95-4933-9284-27acb8dbfdb3 is in state STARTED 2025-07-12 19:57:26.541687 | orchestrator | 2025-07-12 19:57:26 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED 2025-07-12 19:57:26.541730 | orchestrator | 2025-07-12 19:57:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:29.564979 | orchestrator | 2025-07-12 19:57:29 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:29.568861 | orchestrator | 2025-07-12 19:57:29 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:29.570893 | orchestrator | 2025-07-12 19:57:29 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:29.571914 | orchestrator | 2025-07-12 19:57:29 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:29.577085 | orchestrator | 2025-07-12 19:57:29 | INFO  | Task 5c86f323-1b95-4933-9284-27acb8dbfdb3 is in state STARTED 2025-07-12 19:57:29.577650 | orchestrator | 2025-07-12 19:57:29 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED 2025-07-12 19:57:29.577687 | orchestrator | 2025-07-12 19:57:29 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:32.609879 | orchestrator | 2025-07-12 19:57:32 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:32.610112 | orchestrator | 2025-07-12 19:57:32 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:32.610133 | 
orchestrator | 2025-07-12 19:57:32 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:32.610159 | orchestrator | 2025-07-12 19:57:32 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:32.610613 | orchestrator | 2025-07-12 19:57:32 | INFO  | Task 5c86f323-1b95-4933-9284-27acb8dbfdb3 is in state STARTED 2025-07-12 19:57:32.611157 | orchestrator | 2025-07-12 19:57:32 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED 2025-07-12 19:57:32.611178 | orchestrator | 2025-07-12 19:57:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:35.645236 | orchestrator | 2025-07-12 19:57:35 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:35.645329 | orchestrator | 2025-07-12 19:57:35 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:35.647338 | orchestrator | 2025-07-12 19:57:35 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:35.647710 | orchestrator | 2025-07-12 19:57:35 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:35.649366 | orchestrator | 2025-07-12 19:57:35 | INFO  | Task 5c86f323-1b95-4933-9284-27acb8dbfdb3 is in state STARTED 2025-07-12 19:57:35.650914 | orchestrator | 2025-07-12 19:57:35 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED 2025-07-12 19:57:35.650950 | orchestrator | 2025-07-12 19:57:35 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:38.678676 | orchestrator | 2025-07-12 19:57:38 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:38.680073 | orchestrator | 2025-07-12 19:57:38 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:38.680720 | orchestrator | 2025-07-12 19:57:38 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:38.681481 | 
orchestrator | 2025-07-12 19:57:38 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:38.683783 | orchestrator | 2025-07-12 19:57:38 | INFO  | Task 5c86f323-1b95-4933-9284-27acb8dbfdb3 is in state STARTED 2025-07-12 19:57:38.684214 | orchestrator | 2025-07-12 19:57:38 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED 2025-07-12 19:57:38.684495 | orchestrator | 2025-07-12 19:57:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:41.709393 | orchestrator | 2025-07-12 19:57:41 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:41.709898 | orchestrator | 2025-07-12 19:57:41 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:41.710860 | orchestrator | 2025-07-12 19:57:41 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:41.712194 | orchestrator | 2025-07-12 19:57:41 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:41.713816 | orchestrator | 2025-07-12 19:57:41 | INFO  | Task 5c86f323-1b95-4933-9284-27acb8dbfdb3 is in state STARTED 2025-07-12 19:57:41.714746 | orchestrator | 2025-07-12 19:57:41 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED 2025-07-12 19:57:41.715076 | orchestrator | 2025-07-12 19:57:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:44.764841 | orchestrator | 2025-07-12 19:57:44 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:44.764972 | orchestrator | 2025-07-12 19:57:44 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:44.766353 | orchestrator | 2025-07-12 19:57:44 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:44.769599 | orchestrator | 2025-07-12 19:57:44 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:44.769988 | 
orchestrator | 2025-07-12 19:57:44 | INFO  | Task 5c86f323-1b95-4933-9284-27acb8dbfdb3 is in state SUCCESS 2025-07-12 19:57:44.770598 | orchestrator | 2025-07-12 19:57:44 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:57:44.771253 | orchestrator | 2025-07-12 19:57:44 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED 2025-07-12 19:57:44.771276 | orchestrator | 2025-07-12 19:57:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:47.795590 | orchestrator | 2025-07-12 19:57:47 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:47.796633 | orchestrator | 2025-07-12 19:57:47 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:47.796684 | orchestrator | 2025-07-12 19:57:47 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:47.797467 | orchestrator | 2025-07-12 19:57:47 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:47.798315 | orchestrator | 2025-07-12 19:57:47 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:57:47.799015 | orchestrator | 2025-07-12 19:57:47 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state STARTED 2025-07-12 19:57:47.799066 | orchestrator | 2025-07-12 19:57:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:57:50.823050 | orchestrator | 2025-07-12 19:57:50 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:57:50.825914 | orchestrator | 2025-07-12 19:57:50 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:57:50.828421 | orchestrator | 2025-07-12 19:57:50 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:57:50.831657 | orchestrator | 2025-07-12 19:57:50 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED 2025-07-12 19:57:50.834683 | 
orchestrator | 2025-07-12 19:57:50 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:57:50.836053 | orchestrator | 2025-07-12 19:57:50 | INFO  | Task 23cfb6eb-da05-4656-8dc5-3ae4dd35f4a9 is in state SUCCESS
2025-07-12 19:57:50.837162 | orchestrator |
2025-07-12 19:57:50.837184 | orchestrator |
2025-07-12 19:57:50.837194 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 19:57:50.837203 | orchestrator |
2025-07-12 19:57:50.837211 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 19:57:50.837220 | orchestrator | Saturday 12 July 2025 19:57:25 +0000 (0:00:00.571) 0:00:00.571 *********
2025-07-12 19:57:50.837228 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:57:50.837237 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:57:50.837246 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:57:50.837254 | orchestrator |
2025-07-12 19:57:50.837263 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 19:57:50.837271 | orchestrator | Saturday 12 July 2025 19:57:26 +0000 (0:00:00.778) 0:00:01.350 *********
2025-07-12 19:57:50.837279 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-07-12 19:57:50.837286 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-07-12 19:57:50.837294 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-07-12 19:57:50.837301 | orchestrator |
2025-07-12 19:57:50.837309 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-07-12 19:57:50.837316 | orchestrator |
2025-07-12 19:57:50.837324 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-07-12 19:57:50.837331 | orchestrator | Saturday 12 July 2025 19:57:27 +0000 (0:00:00.712) 0:00:02.062 *********
2025-07-12 19:57:50.837339 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 19:57:50.837347 | orchestrator |
2025-07-12 19:57:50.837354 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-07-12 19:57:50.837362 | orchestrator | Saturday 12 July 2025 19:57:28 +0000 (0:00:00.846) 0:00:02.909 *********
2025-07-12 19:57:50.837370 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-07-12 19:57:50.837378 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-07-12 19:57:50.837391 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-07-12 19:57:50.837398 | orchestrator |
2025-07-12 19:57:50.837406 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-07-12 19:57:50.837430 | orchestrator | Saturday 12 July 2025 19:57:29 +0000 (0:00:00.997) 0:00:03.906 *********
2025-07-12 19:57:50.837438 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-07-12 19:57:50.837450 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-07-12 19:57:50.837462 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-07-12 19:57:50.837475 | orchestrator |
2025-07-12 19:57:50.837484 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-07-12 19:57:50.837492 | orchestrator | Saturday 12 July 2025 19:57:31 +0000 (0:00:02.383) 0:00:06.290 *********
2025-07-12 19:57:50.837500 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:57:50.837507 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:57:50.837514 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:57:50.837521 | orchestrator |
2025-07-12 19:57:50.837529 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-07-12 19:57:50.837536 | orchestrator | Saturday 12 July 2025 19:57:34 +0000 (0:00:02.381) 0:00:08.671 *********
2025-07-12 19:57:50.837543 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:57:50.837550 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:57:50.837557 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:57:50.837564 | orchestrator |
2025-07-12 19:57:50.837571 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:57:50.837579 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:50.837597 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:50.837605 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:50.837612 | orchestrator |
2025-07-12 19:57:50.837619 | orchestrator |
2025-07-12 19:57:50.837626 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:57:50.837633 | orchestrator | Saturday 12 July 2025 19:57:41 +0000 (0:00:07.272) 0:00:15.944 *********
2025-07-12 19:57:50.837641 | orchestrator | ===============================================================================
2025-07-12 19:57:50.837648 | orchestrator | memcached : Restart memcached container --------------------------------- 7.27s
2025-07-12 19:57:50.837655 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.38s
2025-07-12 19:57:50.837662 | orchestrator | memcached : Check memcached container ----------------------------------- 2.38s
2025-07-12 19:57:50.837669 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.00s
2025-07-12 19:57:50.837676 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.85s
2025-07-12 19:57:50.837683 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s
2025-07-12 19:57:50.837690 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2025-07-12 19:57:50.837697 | orchestrator |
2025-07-12 19:57:50.837705 | orchestrator |
2025-07-12 19:57:50.837712 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 19:57:50.837719 | orchestrator |
2025-07-12 19:57:50.837726 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 19:57:50.837734 | orchestrator | Saturday 12 July 2025 19:57:23 +0000 (0:00:00.825) 0:00:00.825 *********
2025-07-12 19:57:50.837741 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:57:50.837748 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:57:50.837755 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:57:50.837762 | orchestrator |
2025-07-12 19:57:50.837770 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 19:57:50.837785 | orchestrator | Saturday 12 July 2025 19:57:24 +0000 (0:00:00.668) 0:00:01.493 *********
2025-07-12 19:57:50.837793 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-07-12 19:57:50.837806 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-07-12 19:57:50.837813 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-07-12 19:57:50.837821 | orchestrator |
2025-07-12 19:57:50.837828 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-07-12 19:57:50.837835 | orchestrator |
2025-07-12 19:57:50.837842 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-07-12 19:57:50.837849 | orchestrator | Saturday 12 July 2025 19:57:25 +0000 (0:00:00.869) 0:00:02.363 *********
2025-07-12 19:57:50.837883 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 19:57:50.837892 | orchestrator |
2025-07-12 19:57:50.837899 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-07-12 19:57:50.837906 | orchestrator | Saturday 12 July 2025 19:57:26 +0000 (0:00:00.839) 0:00:03.202 *********
2025-07-12 19:57:50.837916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.837928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.837936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.837948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.837957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.837975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.837983 | orchestrator |
2025-07-12 19:57:50.837991 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-07-12 19:57:50.837998 | orchestrator | Saturday 12 July 2025 19:57:27 +0000 (0:00:01.640) 0:00:04.842 *********
2025-07-12 19:57:50.838006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838101 | orchestrator |
2025-07-12 19:57:50.838109 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-07-12 19:57:50.838116 | orchestrator | Saturday 12 July 2025 19:57:31 +0000 (0:00:03.379) 0:00:08.222 *********
2025-07-12 19:57:50.838124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838178 | orchestrator |
2025-07-12 19:57:50.838189 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-07-12 19:57:50.838197 | orchestrator | Saturday 12 July 2025 19:57:34 +0000 (0:00:03.513) 0:00:11.735 *********
2025-07-12 19:57:50.838205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-07-12 19:57:50.838260 | orchestrator |
2025-07-12 19:57:50.838267 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-07-12 19:57:50.838275 | orchestrator | Saturday 12 July 2025 19:57:36 +0000 (0:00:01.602) 0:00:13.338 *********
2025-07-12 19:57:50.838282 | orchestrator |
2025-07-12 19:57:50.838289 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-07-12 19:57:50.838300 | orchestrator | Saturday 12 July 2025 19:57:36 +0000 (0:00:00.128) 0:00:13.466 *********
2025-07-12 19:57:50.838308 | orchestrator |
2025-07-12 19:57:50.838315 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-07-12 19:57:50.838322 | orchestrator | Saturday 12 July 2025 19:57:36 +0000 (0:00:00.052) 0:00:13.519 *********
2025-07-12 19:57:50.838330 | orchestrator |
2025-07-12 19:57:50.838337 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-07-12 19:57:50.838344 | orchestrator | Saturday 12 July 2025 19:57:36 +0000 (0:00:00.054) 0:00:13.574 *********
2025-07-12 19:57:50.838352 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:57:50.838359 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:57:50.838366 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:57:50.838373 | orchestrator |
2025-07-12 19:57:50.838381 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-07-12 19:57:50.838388 | orchestrator | Saturday 12 July 2025 19:57:40 +0000 (0:00:03.500) 0:00:17.074 *********
2025-07-12 19:57:50.838395 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:57:50.838403 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:57:50.838410 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:57:50.838417 | orchestrator |
2025-07-12 19:57:50.838424 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:57:50.838432 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:50.838440 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:50.838447 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:50.838454 | orchestrator |
2025-07-12 19:57:50.838462 | orchestrator |
2025-07-12 19:57:50.838469 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:57:50.838476 | orchestrator | Saturday 12 July 2025 19:57:49 +0000 (0:00:09.267) 0:00:26.342 *********
2025-07-12 19:57:50.838484 | orchestrator | ===============================================================================
2025-07-12 19:57:50.838491 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.27s
2025-07-12 19:57:50.838498 | orchestrator | redis : Copying over redis config files --------------------------------- 3.51s
2025-07-12 19:57:50.838506 | orchestrator | redis : Restart redis container ----------------------------------------- 3.50s
2025-07-12 19:57:50.838513 | orchestrator | redis : Copying over default config.json files -------------------------- 3.38s
2025-07-12 19:57:50.838520 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.64s
2025-07-12 19:57:50.838532 | orchestrator | redis : Check redis containers ------------------------------------------ 1.60s
2025-07-12 19:57:50.838539 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s
2025-07-12 19:57:50.838547 | orchestrator | redis : include_tasks --------------------------------------------------- 0.84s
2025-07-12 19:57:50.838554 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s
2025-07-12 19:57:50.838561 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s
2025-07-12 19:57:50.838569 | orchestrator | 2025-07-12 19:57:50 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:53.871689 | orchestrator | 2025-07-12 19:57:53 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:53.871986 | orchestrator | 2025-07-12 19:57:53 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:57:53.873364 | orchestrator | 2025-07-12 19:57:53 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:57:53.874141 | orchestrator | 2025-07-12 19:57:53 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:53.875107 | orchestrator | 2025-07-12 19:57:53 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:57:53.875137 | orchestrator | 2025-07-12 19:57:53 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:56.919301 | orchestrator | 2025-07-12 19:57:56 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:56.919973 | orchestrator | 2025-07-12 19:57:56 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:57:56.921642 | orchestrator | 2025-07-12 19:57:56 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:57:56.923273 | orchestrator | 2025-07-12 19:57:56 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:56.924814 | orchestrator | 2025-07-12 19:57:56 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:57:56.924847 | orchestrator | 2025-07-12 19:57:56 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:57:59.973813 | orchestrator | 2025-07-12 19:57:59 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:57:59.976097 | orchestrator | 2025-07-12 19:57:59 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:57:59.977165 | orchestrator | 2025-07-12 19:57:59 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:57:59.978378 | orchestrator | 2025-07-12 19:57:59 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:57:59.981095 | orchestrator | 2025-07-12 19:57:59 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:57:59.981168 | orchestrator | 2025-07-12 19:57:59 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:03.038736 | orchestrator | 2025-07-12 19:58:03 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:03.039108 | orchestrator | 2025-07-12 19:58:03 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:03.044341 | orchestrator | 2025-07-12 19:58:03 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:03.044851 | orchestrator | 2025-07-12 19:58:03 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:58:03.048651 | orchestrator | 2025-07-12 19:58:03 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:03.048717 | orchestrator | 2025-07-12 19:58:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:06.084955 | orchestrator | 2025-07-12 19:58:06 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:06.085347 | orchestrator | 2025-07-12 19:58:06 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:06.087665 | orchestrator | 2025-07-12 19:58:06 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:06.088119 | orchestrator | 2025-07-12 19:58:06 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:58:06.088941 | orchestrator | 2025-07-12 19:58:06 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:06.088966 | orchestrator | 2025-07-12 19:58:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:09.133569 | orchestrator | 2025-07-12 19:58:09 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:09.135633 | orchestrator | 2025-07-12 19:58:09 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:09.135690 | orchestrator | 2025-07-12 19:58:09 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:09.135715 | orchestrator | 2025-07-12 19:58:09 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:58:09.135736 | orchestrator | 2025-07-12 19:58:09 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:09.135748 | orchestrator | 2025-07-12 19:58:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:12.187178 | orchestrator | 2025-07-12 19:58:12 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:12.187435 | orchestrator | 2025-07-12 19:58:12 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:12.188967 | orchestrator | 2025-07-12 19:58:12 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:12.189462 | orchestrator | 2025-07-12 19:58:12 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:58:12.190118 | orchestrator | 2025-07-12 19:58:12 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:12.190161 | orchestrator | 2025-07-12 19:58:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:15.226463 | orchestrator | 2025-07-12 19:58:15 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:15.228358 | orchestrator | 2025-07-12 19:58:15 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:15.228916 | orchestrator | 2025-07-12 19:58:15 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:15.229911 | orchestrator | 2025-07-12 19:58:15 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:58:15.232693 | orchestrator | 2025-07-12 19:58:15 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:15.232719 | orchestrator | 2025-07-12 19:58:15 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:18.273379 | orchestrator | 2025-07-12 19:58:18 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:18.273496 | orchestrator | 2025-07-12 19:58:18 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:18.273936 | orchestrator | 2025-07-12 19:58:18 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:18.274446 | orchestrator | 2025-07-12 19:58:18 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state STARTED
2025-07-12 19:58:18.274977 | orchestrator | 2025-07-12 19:58:18 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:18.275065 | orchestrator | 2025-07-12 19:58:18 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:21.305239 | orchestrator | 2025-07-12 19:58:21 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:21.305528 | orchestrator | 2025-07-12 19:58:21 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:21.306147 | orchestrator | 2025-07-12 19:58:21 | INFO  | Task abbc9fe0-7de0-4cee-904d-ab276286debc is in state STARTED
2025-07-12 19:58:21.306793 | orchestrator | 2025-07-12 19:58:21 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:21.310669 | orchestrator | 2025-07-12 19:58:21 | INFO  | Task 8dff88d3-a26f-462a-91e7-e7d4789bd40a is in state SUCCESS
2025-07-12 19:58:21.312315 | orchestrator |
2025-07-12 19:58:21.312343 | orchestrator |
2025-07-12 19:58:21.312351 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-07-12 19:58:21.312359 | orchestrator |
2025-07-12 19:58:21.312367 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-07-12 19:58:21.312374 | orchestrator | Saturday 12 July 2025 19:54:47 +0000 (0:00:00.170) 0:00:00.170 *********
2025-07-12 19:58:21.312381 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:58:21.312389 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:58:21.312396 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:58:21.312404 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:58:21.312411 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:58:21.312419 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:58:21.312426 | orchestrator |
2025-07-12 19:58:21.312433 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-07-12 19:58:21.312441 | orchestrator | Saturday 12 July 2025 19:54:48 +0000 (0:00:00.614) 0:00:00.785 *********
2025-07-12 19:58:21.312448 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:58:21.312456 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:58:21.312463 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:58:21.312471 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:58:21.312478 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:58:21.312485 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:58:21.312492 | orchestrator |
2025-07-12 19:58:21.312499 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-07-12 19:58:21.312506 | orchestrator | Saturday 12 July 2025 19:54:49 +0000 (0:00:00.610) 0:00:01.395 *********
2025-07-12 19:58:21.312513 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:58:21.312520 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:58:21.312527 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:58:21.312535 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:58:21.312542 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:58:21.312549 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:58:21.312556 | orchestrator |
2025-07-12 19:58:21.312563 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-07-12 19:58:21.312570 | orchestrator | Saturday 12 July 2025 19:54:49 +0000 (0:00:00.809) 0:00:02.204 *********
2025-07-12 19:58:21.312578 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:58:21.312584 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:58:21.312597 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:58:21.312604 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:58:21.312610 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:58:21.312618 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:58:21.312625 | orchestrator |
2025-07-12 19:58:21.312655 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-07-12 19:58:21.312664 | orchestrator | Saturday 12 July 2025 19:54:51 +0000 (0:00:01.928) 0:00:04.133 *********
2025-07-12 19:58:21.312684 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:58:21.312692 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:58:21.312699 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:58:21.312706 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:58:21.312713 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:58:21.312720 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:58:21.312727 | orchestrator |
2025-07-12 19:58:21.312735 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-07-12 19:58:21.312742 | orchestrator | Saturday 12 July 2025 19:54:53 +0000 (0:00:01.256) 0:00:05.390 *********
2025-07-12 19:58:21.312749 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:58:21.312757 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:58:21.312764 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:58:21.312771 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:58:21.312778 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:58:21.312785 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:58:21.312792 | orchestrator |
2025-07-12 19:58:21.312799 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-07-12 19:58:21.312806 | orchestrator | Saturday 12 July 2025 19:54:54 +0000 (0:00:01.123) 0:00:06.514 *********
2025-07-12 19:58:21.312814 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:58:21.312821 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:58:21.312829 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:58:21.312836 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:58:21.312856 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.312864 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.312871 | orchestrator | 2025-07-12 19:58:21.312878 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-07-12 19:58:21.312886 | orchestrator | Saturday 12 July 2025 19:54:55 +0000 (0:00:00.794) 0:00:07.308 ********* 2025-07-12 19:58:21.312905 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:21.312913 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:21.312920 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:21.312927 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.312934 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.312941 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.312949 | orchestrator | 2025-07-12 19:58:21.312956 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-07-12 19:58:21.312964 | orchestrator | Saturday 12 July 2025 19:54:55 +0000 (0:00:00.703) 0:00:08.011 ********* 2025-07-12 19:58:21.312971 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 19:58:21.312978 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 19:58:21.312985 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:21.312992 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 19:58:21.312999 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 19:58:21.313006 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:21.313013 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 19:58:21.313020 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 
19:58:21.313027 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:21.313035 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 19:58:21.313050 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 19:58:21.313057 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.313064 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 19:58:21.313071 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 19:58:21.313077 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.313092 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 19:58:21.313100 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 19:58:21.313116 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.313124 | orchestrator | 2025-07-12 19:58:21.313131 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-07-12 19:58:21.313138 | orchestrator | Saturday 12 July 2025 19:54:56 +0000 (0:00:00.924) 0:00:08.935 ********* 2025-07-12 19:58:21.313145 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:21.313152 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:21.313160 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:21.313167 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.313174 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.313182 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.313188 | orchestrator | 2025-07-12 19:58:21.313195 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-07-12 19:58:21.313235 | orchestrator | Saturday 12 July 2025 19:54:58 +0000 (0:00:01.318) 0:00:10.254 
********* 2025-07-12 19:58:21.313243 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:21.313251 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:21.313258 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:21.313265 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.313272 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.313279 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.313286 | orchestrator | 2025-07-12 19:58:21.313294 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-07-12 19:58:21.313301 | orchestrator | Saturday 12 July 2025 19:54:58 +0000 (0:00:00.544) 0:00:10.799 ********* 2025-07-12 19:58:21.313308 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:58:21.313320 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.313327 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.313333 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.313339 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:58:21.313345 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:58:21.313351 | orchestrator | 2025-07-12 19:58:21.313357 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-07-12 19:58:21.313374 | orchestrator | Saturday 12 July 2025 19:55:03 +0000 (0:00:05.324) 0:00:16.123 ********* 2025-07-12 19:58:21.313381 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:21.313388 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:21.313395 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:21.313401 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.313408 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.313415 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.313422 | orchestrator | 2025-07-12 19:58:21.313429 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-07-12 
19:58:21.313435 | orchestrator | Saturday 12 July 2025 19:55:04 +0000 (0:00:00.794) 0:00:16.917 ********* 2025-07-12 19:58:21.313442 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:21.313448 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:21.313455 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:21.313462 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.313469 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.313476 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.313483 | orchestrator | 2025-07-12 19:58:21.313490 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-07-12 19:58:21.313498 | orchestrator | Saturday 12 July 2025 19:55:06 +0000 (0:00:01.862) 0:00:18.779 ********* 2025-07-12 19:58:21.313505 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:21.313512 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:21.313519 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:21.313526 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.313540 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.313548 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.313554 | orchestrator | 2025-07-12 19:58:21.313562 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-07-12 19:58:21.313569 | orchestrator | Saturday 12 July 2025 19:55:07 +0000 (0:00:00.939) 0:00:19.719 ********* 2025-07-12 19:58:21.313576 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-07-12 19:58:21.313583 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-07-12 19:58:21.313590 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-07-12 19:58:21.313597 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-07-12 19:58:21.313604 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-07-12 
19:58:21.313611 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-07-12 19:58:21.313618 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-07-12 19:58:21.313625 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-07-12 19:58:21.313632 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-07-12 19:58:21.313639 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-07-12 19:58:21.313646 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-07-12 19:58:21.313653 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-07-12 19:58:21.313660 | orchestrator | 2025-07-12 19:58:21.313667 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-07-12 19:58:21.313673 | orchestrator | Saturday 12 July 2025 19:55:09 +0000 (0:00:02.342) 0:00:22.062 ********* 2025-07-12 19:58:21.313680 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:58:21.313687 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:58:21.313694 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:58:21.313701 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.313708 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.313714 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.313720 | orchestrator | 2025-07-12 19:58:21.313735 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-07-12 19:58:21.313742 | orchestrator | 2025-07-12 19:58:21.313749 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-07-12 19:58:21.313757 | orchestrator | Saturday 12 July 2025 19:55:12 +0000 (0:00:02.174) 0:00:24.237 ********* 2025-07-12 19:58:21.313763 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.313770 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.313777 | orchestrator | ok: [testbed-node-2] 
2025-07-12 19:58:21.313784 | orchestrator | 2025-07-12 19:58:21.313791 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-07-12 19:58:21.313797 | orchestrator | Saturday 12 July 2025 19:55:13 +0000 (0:00:01.694) 0:00:25.932 ********* 2025-07-12 19:58:21.313804 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.313811 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.313818 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.313825 | orchestrator | 2025-07-12 19:58:21.313832 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-07-12 19:58:21.313838 | orchestrator | Saturday 12 July 2025 19:55:14 +0000 (0:00:01.000) 0:00:26.932 ********* 2025-07-12 19:58:21.313845 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.313852 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.313859 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.313866 | orchestrator | 2025-07-12 19:58:21.313873 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-07-12 19:58:21.313880 | orchestrator | Saturday 12 July 2025 19:55:15 +0000 (0:00:01.034) 0:00:27.967 ********* 2025-07-12 19:58:21.313886 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.313904 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.313910 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.313916 | orchestrator | 2025-07-12 19:58:21.313922 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-07-12 19:58:21.313929 | orchestrator | Saturday 12 July 2025 19:55:16 +0000 (0:00:00.974) 0:00:28.941 ********* 2025-07-12 19:58:21.313942 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.313949 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.313956 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.313962 | orchestrator | 2025-07-12 
19:58:21.313969 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-07-12 19:58:21.313979 | orchestrator | Saturday 12 July 2025 19:55:17 +0000 (0:00:00.468) 0:00:29.410 ********* 2025-07-12 19:58:21.313987 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.313993 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.314000 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.314007 | orchestrator | 2025-07-12 19:58:21.314057 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-07-12 19:58:21.314069 | orchestrator | Saturday 12 July 2025 19:55:18 +0000 (0:00:00.862) 0:00:30.273 ********* 2025-07-12 19:58:21.314076 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.314083 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.314090 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.314097 | orchestrator | 2025-07-12 19:58:21.314104 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-07-12 19:58:21.314111 | orchestrator | Saturday 12 July 2025 19:55:19 +0000 (0:00:01.758) 0:00:32.031 ********* 2025-07-12 19:58:21.314118 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 19:58:21.314126 | orchestrator | 2025-07-12 19:58:21.314133 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-07-12 19:58:21.314139 | orchestrator | Saturday 12 July 2025 19:55:20 +0000 (0:00:00.506) 0:00:32.538 ********* 2025-07-12 19:58:21.314146 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.314153 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.314160 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.314167 | orchestrator | 2025-07-12 19:58:21.314174 | orchestrator | TASK [k3s_server : Create manifests directory on first master] 
***************** 2025-07-12 19:58:21.314181 | orchestrator | Saturday 12 July 2025 19:55:22 +0000 (0:00:02.146) 0:00:34.685 ********* 2025-07-12 19:58:21.314189 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.314196 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.314203 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.314210 | orchestrator | 2025-07-12 19:58:21.314217 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-07-12 19:58:21.314224 | orchestrator | Saturday 12 July 2025 19:55:23 +0000 (0:00:00.773) 0:00:35.458 ********* 2025-07-12 19:58:21.314231 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.314238 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.314245 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.314252 | orchestrator | 2025-07-12 19:58:21.314259 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-07-12 19:58:21.314265 | orchestrator | Saturday 12 July 2025 19:55:24 +0000 (0:00:01.265) 0:00:36.724 ********* 2025-07-12 19:58:21.314273 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.314279 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.314285 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.314291 | orchestrator | 2025-07-12 19:58:21.314298 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-07-12 19:58:21.314305 | orchestrator | Saturday 12 July 2025 19:55:25 +0000 (0:00:01.387) 0:00:38.111 ********* 2025-07-12 19:58:21.314312 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.314318 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.314325 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.314331 | orchestrator | 2025-07-12 19:58:21.314337 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] 
*********************************** 2025-07-12 19:58:21.314344 | orchestrator | Saturday 12 July 2025 19:55:26 +0000 (0:00:00.497) 0:00:38.608 ********* 2025-07-12 19:58:21.314350 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.314363 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.314370 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.314377 | orchestrator | 2025-07-12 19:58:21.314383 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-07-12 19:58:21.314390 | orchestrator | Saturday 12 July 2025 19:55:27 +0000 (0:00:00.811) 0:00:39.420 ********* 2025-07-12 19:58:21.314396 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.314403 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.314410 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.314417 | orchestrator | 2025-07-12 19:58:21.314432 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-07-12 19:58:21.314439 | orchestrator | Saturday 12 July 2025 19:55:28 +0000 (0:00:01.636) 0:00:41.056 ********* 2025-07-12 19:58:21.314446 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-07-12 19:58:21.314453 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-07-12 19:58:21.314459 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-07-12 19:58:21.314465 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 
2025-07-12 19:58:21.314472 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-07-12 19:58:21.314478 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-07-12 19:58:21.314485 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-07-12 19:58:21.314492 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-07-12 19:58:21.314499 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-07-12 19:58:21.314510 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-07-12 19:58:21.314517 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-07-12 19:58:21.314524 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-07-12 19:58:21.314531 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-07-12 19:58:21.314538 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.314545 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.314561 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.314574 | orchestrator | 2025-07-12 19:58:21.314582 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-07-12 19:58:21.314589 | orchestrator | Saturday 12 July 2025 19:56:23 +0000 (0:00:54.891) 0:01:35.948 ********* 2025-07-12 19:58:21.314596 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.314603 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.314610 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.314617 | orchestrator | 2025-07-12 19:58:21.314624 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-07-12 19:58:21.314631 | orchestrator | Saturday 12 July 2025 19:56:24 +0000 (0:00:00.323) 0:01:36.272 ********* 2025-07-12 19:58:21.314638 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.314645 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.314658 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.314665 | orchestrator | 2025-07-12 19:58:21.314672 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-07-12 19:58:21.314679 | orchestrator | Saturday 12 July 2025 19:56:25 +0000 (0:00:01.368) 0:01:37.640 ********* 2025-07-12 19:58:21.314686 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.314693 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.314700 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.314706 | orchestrator | 2025-07-12 19:58:21.314713 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-07-12 19:58:21.314720 | orchestrator | Saturday 12 July 2025 19:56:26 +0000 (0:00:01.199) 0:01:38.840 ********* 2025-07-12 19:58:21.314726 
| orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.314732 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.314739 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.314746 | orchestrator | 2025-07-12 19:58:21.314753 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-07-12 19:58:21.314760 | orchestrator | Saturday 12 July 2025 19:56:50 +0000 (0:00:24.353) 0:02:03.193 ********* 2025-07-12 19:58:21.314767 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.314774 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.314781 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.314788 | orchestrator | 2025-07-12 19:58:21.314795 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-07-12 19:58:21.314802 | orchestrator | Saturday 12 July 2025 19:56:51 +0000 (0:00:00.611) 0:02:03.804 ********* 2025-07-12 19:58:21.314809 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.314816 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.314823 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.314831 | orchestrator | 2025-07-12 19:58:21.314838 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-07-12 19:58:21.314845 | orchestrator | Saturday 12 July 2025 19:56:52 +0000 (0:00:00.721) 0:02:04.526 ********* 2025-07-12 19:58:21.314853 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.314860 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.314867 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.314874 | orchestrator | 2025-07-12 19:58:21.314918 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-07-12 19:58:21.314935 | orchestrator | Saturday 12 July 2025 19:56:52 +0000 (0:00:00.644) 0:02:05.170 ********* 2025-07-12 19:58:21.314942 | orchestrator | ok: [testbed-node-1] 
2025-07-12 19:58:21.314949 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.314956 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.314963 | orchestrator | 2025-07-12 19:58:21.314970 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-07-12 19:58:21.314976 | orchestrator | Saturday 12 July 2025 19:56:53 +0000 (0:00:00.666) 0:02:05.837 ********* 2025-07-12 19:58:21.314983 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.314990 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:21.314997 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:21.315003 | orchestrator | 2025-07-12 19:58:21.315010 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-07-12 19:58:21.315017 | orchestrator | Saturday 12 July 2025 19:56:53 +0000 (0:00:00.275) 0:02:06.112 ********* 2025-07-12 19:58:21.315024 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.315031 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.315038 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.315045 | orchestrator | 2025-07-12 19:58:21.315053 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-07-12 19:58:21.315060 | orchestrator | Saturday 12 July 2025 19:56:54 +0000 (0:00:00.795) 0:02:06.908 ********* 2025-07-12 19:58:21.315067 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.315074 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.315081 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.315088 | orchestrator | 2025-07-12 19:58:21.315101 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-07-12 19:58:21.315109 | orchestrator | Saturday 12 July 2025 19:56:55 +0000 (0:00:00.606) 0:02:07.515 ********* 2025-07-12 19:58:21.315116 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.315122 | 
orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.315129 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.315135 | orchestrator | 2025-07-12 19:58:21.315142 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-07-12 19:58:21.315150 | orchestrator | Saturday 12 July 2025 19:56:56 +0000 (0:00:00.785) 0:02:08.300 ********* 2025-07-12 19:58:21.315161 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:21.315168 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:21.315176 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:21.315183 | orchestrator | 2025-07-12 19:58:21.315190 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-07-12 19:58:21.315197 | orchestrator | Saturday 12 July 2025 19:56:56 +0000 (0:00:00.748) 0:02:09.049 ********* 2025-07-12 19:58:21.315212 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.315255 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.315262 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.315270 | orchestrator | 2025-07-12 19:58:21.315277 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-07-12 19:58:21.315285 | orchestrator | Saturday 12 July 2025 19:56:57 +0000 (0:00:00.442) 0:02:09.491 ********* 2025-07-12 19:58:21.315291 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:21.315297 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:21.315303 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:21.315310 | orchestrator | 2025-07-12 19:58:21.315317 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-07-12 19:58:21.315325 | orchestrator | Saturday 12 July 2025 19:56:57 +0000 (0:00:00.243) 0:02:09.735 ********* 2025-07-12 19:58:21.315331 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:21.315337 | orchestrator | 
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Saturday 12 July 2025 19:56:58 +0000 (0:00:00.596) 0:02:10.332 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Saturday 12 July 2025 19:56:58 +0000 (0:00:00.595) 0:02:10.927 *********
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Saturday 12 July 2025 19:57:01 +0000 (0:00:03.175) 0:02:14.103 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Saturday 12 July 2025 19:57:02 +0000 (0:00:00.297) 0:02:14.400 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Saturday 12 July 2025 19:57:02 +0000 (0:00:00.625) 0:02:15.026 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Saturday 12 July 2025 19:57:03 +0000 (0:00:00.411) 0:02:15.437 *********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Saturday 12 July 2025 19:57:03 +0000 (0:00:00.465) 0:02:15.903 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Saturday 12 July 2025 19:57:03 +0000 (0:00:00.269) 0:02:16.172 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Saturday 12 July 2025 19:57:04 +0000 (0:00:00.400) 0:02:16.573 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Saturday 12 July 2025 19:57:04 +0000 (0:00:00.241) 0:02:16.815 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Saturday 12 July 2025 19:57:05 +0000 (0:00:00.709) 0:02:17.524 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Saturday 12 July 2025 19:57:06 +0000 (0:00:01.048) 0:02:18.572 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Saturday 12 July 2025 19:57:07 +0000 (0:00:01.382) 0:02:19.954 *********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Saturday 12 July 2025 19:57:19 +0000 (0:00:11.758) 0:02:31.713 *********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Saturday 12 July 2025 19:57:20 +0000 (0:00:00.747) 0:02:32.461 *********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Saturday 12 July 2025 19:57:20 +0000 (0:00:00.398) 0:02:32.859 *********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Saturday 12 July 2025 19:57:21 +0000 (0:00:00.901) 0:02:33.761 *********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Saturday 12 July 2025 19:57:22 +0000 (0:00:00.647) 0:02:34.409 *********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Saturday 12 July 2025 19:57:22 +0000 (0:00:00.437) 0:02:34.846 *********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Saturday 12 July 2025 19:57:24 +0000 (0:00:01.566) 0:02:36.412 *********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Saturday 12 July 2025 19:57:24 +0000 (0:00:00.681) 0:02:37.094 *********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Saturday 12 July 2025 19:57:25 +0000 (0:00:00.348) 0:02:37.443 *********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Saturday 12 July 2025 19:57:25 +0000 (0:00:00.370) 0:02:37.813 *********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Saturday 12 July 2025 19:57:25 +0000 (0:00:00.099) 0:02:37.912 *********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Saturday 12 July 2025 19:57:25 +0000 (0:00:00.314) 0:02:38.227 *********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
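The two "Change server address in the kubeconfig" tasks above exist because k3s writes its kubeconfig with `server: https://127.0.0.1:6443`, which is only reachable on the node itself. A minimal Python sketch of that rewrite follows; the endpoint `192.168.16.10` is taken from the delegation shown in the log, but the function itself is illustrative and not the role's actual implementation:

```python
# Hypothetical sketch of rewriting the k3s kubeconfig server address.
# k3s emits "server: https://127.0.0.1:6443"; a copy fetched to another
# host must be pointed at a reachable cluster endpoint instead.
import re

def rewrite_server(kubeconfig_text: str, endpoint: str) -> str:
    """Replace the loopback server entry with the given cluster endpoint."""
    return re.sub(
        r"server: https://127\.0\.0\.1:6443",
        f"server: https://{endpoint}:6443",
        kubeconfig_text,
    )

original = "clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n"
print(rewrite_server(original, "192.168.16.10"))
```

In the testbed the same rewrite happens twice: once for the operator user's `~/.kube/config` and once for the copy made available inside the manager service.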
Saturday 12 July 2025 19:57:26 +0000 (0:00:00.751) 0:02:38.978 *********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Saturday 12 July 2025 19:57:28 +0000 (0:00:01.294) 0:02:40.273 *********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Saturday 12 July 2025 19:57:28 +0000 (0:00:00.701) 0:02:40.974 *********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Saturday 12 July 2025 19:57:29 +0000 (0:00:00.416) 0:02:41.391 *********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Saturday 12 July 2025 19:57:34 +0000 (0:00:05.218) 0:02:46.609 *********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Saturday 12 July 2025 19:57:45 +0000 (0:00:10.708) 0:02:57.317 *********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Saturday 12 July 2025 19:57:45 +0000 (0:00:00.545) 0:02:57.863 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Saturday 12 July 2025 19:57:46 +0000 (0:00:00.429) 0:02:58.292 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Saturday 12 July 2025 19:57:46 +0000 (0:00:00.302) 0:02:58.595 *********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Saturday 12 July 2025 19:57:46 +0000 (0:00:00.430) 0:02:59.097 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
Saturday 12 July 2025 19:57:47 +0000 (0:00:00.430) 0:02:59.527 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
Saturday 12 July 2025 19:57:47 +0000 (0:00:00.153) 0:02:59.680 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
Saturday 12 July 2025 19:57:47 +0000 (0:00:00.183) 0:02:59.864 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
Saturday 12 July 2025 19:57:47 +0000 (0:00:00.155) 0:03:00.020 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log installed Cilium CLI version] **********************
Saturday 12 July 2025 19:57:47 +0000 (0:00:00.186) 0:03:00.206 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
Saturday 12 July 2025 19:57:48 +0000 (0:00:00.182) 0:03:00.389 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
Saturday 12 July 2025 19:57:48 +0000 (0:00:00.179) 0:03:00.569 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Set architecture variable] *****************************
Saturday 12 July 2025 19:57:48 +0000 (0:00:00.220) 0:03:00.790 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
Saturday 12 July 2025 19:57:48 +0000 (0:00:00.188) 0:03:00.978 *********
skipping: [testbed-node-0] => (item=.tar.gz)
skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
skipping: [testbed-node-0]

TASK [k3s_server_post : Verify the downloaded tarball] *************************
Saturday 12 July 2025 19:57:49 +0000 (0:00:00.348) 0:03:01.326 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
Saturday 12 July 2025 19:57:49 +0000 (0:00:00.171) 0:03:01.497 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
Saturday 12 July 2025 19:57:49 +0000 (0:00:00.180) 0:03:01.678 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Saturday 12 July 2025 19:57:49 +0000 (0:00:00.450) 0:03:02.129 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Saturday 12 July 2025 19:57:50 +0000 (0:00:00.180) 0:03:02.309 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Saturday 12 July 2025 19:57:50 +0000 (0:00:00.175) 0:03:02.485 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Check Cilium version] **********************************
Saturday 12 July 2025 19:57:50 +0000 (0:00:00.179) 0:03:02.664 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Saturday 12 July 2025 19:57:50 +0000 (0:00:00.197) 0:03:02.862 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Saturday 12 July 2025 19:57:50 +0000 (0:00:00.188) 0:03:03.051 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Saturday 12 July 2025 19:57:51 +0000 (0:00:00.182) 0:03:03.234 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Saturday 12 July 2025 19:57:51 +0000 (0:00:00.182) 0:03:03.416 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Saturday 12 July 2025 19:57:51 +0000 (0:00:00.214) 0:03:03.631 *********
skipping: [testbed-node-0] => (item=deployment/cilium-operator)
skipping: [testbed-node-0] => (item=daemonset/cilium)
skipping: [testbed-node-0] => (item=deployment/hubble-relay)
skipping: [testbed-node-0] => (item=deployment/hubble-ui)
skipping: [testbed-node-0]

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Saturday 12 July 2025 19:57:52 +0000 (0:00:00.620) 0:03:04.251 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Saturday 12 July 2025 19:57:52 +0000 (0:00:00.202) 0:03:04.453 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Saturday 12 July 2025 19:57:52 +0000 (0:00:00.212) 0:03:04.666 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Saturday 12 July 2025 19:57:53 +0000 (0:00:00.639) 0:03:05.305 *********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Saturday 12 July 2025 19:57:53 +0000 (0:00:00.289) 0:03:05.595 *********
skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
skipping: [testbed-node-0]

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Saturday 12 July 2025 19:57:53 +0000 (0:00:00.365) 0:03:05.960 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Saturday 12 July 2025 19:57:54 +0000 (0:00:00.421) 0:03:06.381 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Saturday 12 July 2025 19:57:55 +0000 (0:00:00.984) 0:03:07.366 *********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Saturday 12 July 2025 19:57:55 +0000 (0:00:00.314) 0:03:07.681 *********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Saturday 12 July 2025 19:57:55 +0000 (0:00:00.233) 0:03:07.914 *********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Saturday 12 July 2025 19:58:02 +0000 (0:00:06.408) 0:03:14.322 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Saturday 12 July 2025 19:58:02 +0000 (0:00:00.749) 0:03:15.072 *********
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Saturday 12 July 2025 19:58:17 +0000 (0:00:14.725) 0:03:29.797 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Saturday 12 July 2025 19:58:18 +0000 (0:00:00.453) 0:03:30.251 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0  rescued=0 ignored=0
testbed-node-0  : ok=42  changed=20  unreachable=0 failed=0 skipped=45 rescued=0 ignored=0
testbed-node-1  : ok=39  changed=17  unreachable=0 failed=0 skipped=21 rescued=0 ignored=0
testbed-node-2  : ok=39  changed=17  unreachable=0 failed=0 skipped=21 rescued=0 ignored=0
testbed-node-3  : ok=19  changed=9   unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
testbed-node-4  : ok=19  changed=9   unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
testbed-node-5  : ok=19  changed=9   unreachable=0 failed=0 skipped=13 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Saturday 12 July 2025 19:58:18 +0000 (0:00:00.522) 0:03:30.773 *********
===============================================================================
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.89s
k3s_server : Enable and check K3s service ------------------------------ 24.35s
Manage labels ---------------------------------------------------------- 14.73s
k3s_agent : Manage k3s service ----------------------------------------- 11.76s
kubectl : Install required packages ------------------------------------ 10.71s
k9s : Install k9s packages ---------------------------------------------- 6.41s
k3s_download : Download k3s binary x64 ---------------------------------- 5.32s
kubectl : Add repository Debian ----------------------------------------- 5.22s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.18s
k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.34s
k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.17s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.15s
k3s_prereq : Enable IPv4 forwarding -------------------------------------
1.93s 2025-07-12 19:58:21.317487 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.86s 2025-07-12 19:58:21.317491 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.76s 2025-07-12 19:58:21.317495 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.69s 2025-07-12 19:58:21.317499 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.64s 2025-07-12 19:58:21.317503 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.57s 2025-07-12 19:58:21.317506 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.39s 2025-07-12 19:58:21.317510 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.38s 2025-07-12 19:58:21.317514 | orchestrator | 2025-07-12 19:58:21 | INFO  | Task 71985acf-6fdc-40f1-b6e3-615aa556e1dd is in state STARTED 2025-07-12 19:58:21.317518 | orchestrator | 2025-07-12 19:58:21 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:58:21.317522 | orchestrator | 2025-07-12 19:58:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:58:24.345100 | orchestrator | 2025-07-12 19:58:24 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:58:24.346350 | orchestrator | 2025-07-12 19:58:24 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED 2025-07-12 19:58:24.349691 | orchestrator | 2025-07-12 19:58:24 | INFO  | Task abbc9fe0-7de0-4cee-904d-ab276286debc is in state STARTED 2025-07-12 19:58:24.352954 | orchestrator | 2025-07-12 19:58:24 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:58:24.353001 | orchestrator | 2025-07-12 19:58:24 | INFO  | Task 71985acf-6fdc-40f1-b6e3-615aa556e1dd is in state STARTED 2025-07-12 19:58:24.353233 | orchestrator | 
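The "Manage labels" task in the recap above applies the role labels seen in the loop output (control-plane, worker, rook-*, network-plane) to the k3s nodes. As a minimal sketch of the equivalent operation, assuming a hypothetical node-to-label map and driving `kubectl` directly (the playbook itself uses Ansible tasks, not this code):

```python
# Sketch: build the idempotent "kubectl label --overwrite" calls implied by
# the "Manage labels" loop. Node names and labels are copied from the log;
# the helper and its shape are illustrative, not the playbook's actual code.
LABELS = {
    "testbed-node-0": [
        "node-role.osism.tech/control-plane=true",
        "openstack-control-plane=enabled",
        "node-role.osism.tech/network-plane=true",
    ],
    "testbed-node-3": [
        "node-role.kubernetes.io/worker=worker",
        "node-role.osism.tech/rook-osd=true",
    ],
}

def label_commands(labels: dict) -> list:
    """One 'kubectl label --overwrite' invocation per node/label pair."""
    return [
        ["kubectl", "label", "--overwrite", "node", node, label]
        for node, items in labels.items()
        for label in items
    ]

for cmd in label_commands(LABELS):
    print(" ".join(cmd))
```

`--overwrite` makes re-runs safe, which matches the all-`ok` (unchanged) results in the log.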
2025-07-12 19:58:24 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:24.354117 | orchestrator | 2025-07-12 19:58:24 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:27.403352 | orchestrator | 2025-07-12 19:58:27 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:27.405166 | orchestrator | 2025-07-12 19:58:27 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:27.405633 | orchestrator | 2025-07-12 19:58:27 | INFO  | Task abbc9fe0-7de0-4cee-904d-ab276286debc is in state STARTED
2025-07-12 19:58:27.407782 | orchestrator | 2025-07-12 19:58:27 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:27.408952 | orchestrator | 2025-07-12 19:58:27 | INFO  | Task 71985acf-6fdc-40f1-b6e3-615aa556e1dd is in state SUCCESS
2025-07-12 19:58:27.408981 | orchestrator | 2025-07-12 19:58:27 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:27.408993 | orchestrator | 2025-07-12 19:58:27 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:30.445042 | orchestrator | 2025-07-12 19:58:30 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:30.446774 | orchestrator | 2025-07-12 19:58:30 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:30.449774 | orchestrator | 2025-07-12 19:58:30 | INFO  | Task abbc9fe0-7de0-4cee-904d-ab276286debc is in state SUCCESS
2025-07-12 19:58:30.449828 | orchestrator | 2025-07-12 19:58:30 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:30.450616 | orchestrator | 2025-07-12 19:58:30 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:30.450794 | orchestrator | 2025-07-12 19:58:30 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:33.487521 | orchestrator | 2025-07-12 19:58:33 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:33.488489 | orchestrator | 2025-07-12 19:58:33 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:33.490366 | orchestrator | 2025-07-12 19:58:33 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:33.493613 | orchestrator | 2025-07-12 19:58:33 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:33.493667 | orchestrator | 2025-07-12 19:58:33 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:36.533127 | orchestrator | 2025-07-12 19:58:36 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:36.534671 | orchestrator | 2025-07-12 19:58:36 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state STARTED
2025-07-12 19:58:36.537676 | orchestrator | 2025-07-12 19:58:36 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:36.541114 | orchestrator | 2025-07-12 19:58:36 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:36.541208 | orchestrator | 2025-07-12 19:58:36 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:39.573606 | orchestrator | 2025-07-12 19:58:39 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:39.575043 | orchestrator |
2025-07-12 19:58:39.575561 | orchestrator |
2025-07-12 19:58:39.575587 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-07-12 19:58:39.575600 | orchestrator |
2025-07-12 19:58:39.575611 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-07-12 19:58:39.575622 | orchestrator | Saturday 12 July 2025 19:58:23 +0000 (0:00:00.158) 0:00:00.158 *********
2025-07-12 19:58:39.575634 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-07-12 19:58:39.575701 | orchestrator |
2025-07-12 19:58:39.575714 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-07-12 19:58:39.576197 | orchestrator | Saturday 12 July 2025 19:58:23 +0000 (0:00:00.893) 0:00:01.052 *********
2025-07-12 19:58:39.576221 | orchestrator | changed: [testbed-manager]
2025-07-12 19:58:39.576233 | orchestrator |
2025-07-12 19:58:39.576244 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-07-12 19:58:39.576255 | orchestrator | Saturday 12 July 2025 19:58:25 +0000 (0:00:01.167) 0:00:02.219 *********
2025-07-12 19:58:39.576266 | orchestrator | changed: [testbed-manager]
2025-07-12 19:58:39.576278 | orchestrator |
2025-07-12 19:58:39.576292 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:58:39.576310 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:58:39.576326 | orchestrator |
2025-07-12 19:58:39.576338 | orchestrator |
2025-07-12 19:58:39.576349 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:58:39.576360 | orchestrator | Saturday 12 July 2025 19:58:25 +0000 (0:00:00.429) 0:00:02.648 *********
2025-07-12 19:58:39.576370 | orchestrator | ===============================================================================
2025-07-12 19:58:39.576381 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.17s
2025-07-12 19:58:39.576392 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.89s
2025-07-12 19:58:39.576403 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.43s
2025-07-12 19:58:39.576413 | orchestrator |
2025-07-12 19:58:39.576424 | orchestrator |
2025-07-12 19:58:39.576435 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-07-12 19:58:39.576445 | orchestrator |
2025-07-12 19:58:39.576456 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-07-12 19:58:39.576467 | orchestrator | Saturday 12 July 2025 19:58:22 +0000 (0:00:00.164) 0:00:00.164 *********
2025-07-12 19:58:39.576477 | orchestrator | ok: [testbed-manager]
2025-07-12 19:58:39.576489 | orchestrator |
2025-07-12 19:58:39.576499 | orchestrator | TASK [Create .kube directory] **************************************************
2025-07-12 19:58:39.576510 | orchestrator | Saturday 12 July 2025 19:58:22 +0000 (0:00:00.552) 0:00:00.716 *********
2025-07-12 19:58:39.576520 | orchestrator | ok: [testbed-manager]
2025-07-12 19:58:39.576531 | orchestrator |
2025-07-12 19:58:39.576542 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-07-12 19:58:39.576552 | orchestrator | Saturday 12 July 2025 19:58:23 +0000 (0:00:00.569) 0:00:01.286 *********
2025-07-12 19:58:39.576563 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-07-12 19:58:39.576573 | orchestrator |
2025-07-12 19:58:39.576584 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-07-12 19:58:39.576594 | orchestrator | Saturday 12 July 2025 19:58:24 +0000 (0:00:01.135) 0:00:02.007 *********
2025-07-12 19:58:39.576605 | orchestrator | changed: [testbed-manager]
2025-07-12 19:58:39.576616 | orchestrator |
2025-07-12 19:58:39.576707 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-07-12 19:58:39.576720 | orchestrator | Saturday 12 July 2025 19:58:25 +0000 (0:00:00.728) 0:00:03.142 *********
2025-07-12 19:58:39.576731 | orchestrator | changed: [testbed-manager]
2025-07-12 19:58:39.576742 | orchestrator |
2025-07-12 19:58:39.576753 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-07-12 19:58:39.576764 | orchestrator | Saturday 12 July 2025 19:58:26 +0000 (0:00:00.728) 0:00:03.870 *********
2025-07-12 19:58:39.576775 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-12 19:58:39.576786 | orchestrator |
2025-07-12 19:58:39.576797 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-07-12 19:58:39.576807 | orchestrator | Saturday 12 July 2025 19:58:27 +0000 (0:00:01.515) 0:00:05.386 *********
2025-07-12 19:58:39.576818 | orchestrator | changed: [testbed-manager -> localhost]
2025-07-12 19:58:39.576829 | orchestrator |
2025-07-12 19:58:39.576840 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-07-12 19:58:39.576879 | orchestrator | Saturday 12 July 2025 19:58:28 +0000 (0:00:00.772) 0:00:06.158 *********
2025-07-12 19:58:39.576891 | orchestrator | ok: [testbed-manager]
2025-07-12 19:58:39.576929 | orchestrator |
2025-07-12 19:58:39.576941 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-07-12 19:58:39.576952 | orchestrator | Saturday 12 July 2025 19:58:28 +0000 (0:00:00.427) 0:00:06.586 *********
2025-07-12 19:58:39.576963 | orchestrator | ok: [testbed-manager]
2025-07-12 19:58:39.576974 | orchestrator |
2025-07-12 19:58:39.576985 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:58:39.576997 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:58:39.577008 | orchestrator |
2025-07-12 19:58:39.577019 | orchestrator |
2025-07-12 19:58:39.577029 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:58:39.577040 | orchestrator | Saturday 12 July 2025 19:58:29 +0000 (0:00:00.348) 0:00:06.934 *********
2025-07-12 19:58:39.577051 | orchestrator | ===============================================================================
2025-07-12 19:58:39.577062 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s
2025-07-12 19:58:39.577073 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.14s
2025-07-12 19:58:39.577084 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.77s
2025-07-12 19:58:39.577110 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.73s
2025-07-12 19:58:39.577122 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s
2025-07-12 19:58:39.577133 | orchestrator | Create .kube directory -------------------------------------------------- 0.57s
2025-07-12 19:58:39.577144 | orchestrator | Get home directory of operator user ------------------------------------- 0.55s
2025-07-12 19:58:39.577154 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.43s
2025-07-12 19:58:39.577165 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.35s
2025-07-12 19:58:39.577175 | orchestrator |
2025-07-12 19:58:39.577187 | orchestrator | 2025-07-12 19:58:39 | INFO  | Task eeb3c2d6-fe39-46b4-874c-cbfb88c4eb31 is in state SUCCESS
2025-07-12 19:58:39.577197 | orchestrator |
2025-07-12 19:58:39.577208 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 19:58:39.577218 | orchestrator |
2025-07-12 19:58:39.577229 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 19:58:39.577240 | orchestrator | Saturday 12 July 2025 19:57:24 +0000 (0:00:00.388) 0:00:00.388 *********
2025-07-12 19:58:39.577250 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:58:39.577261 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:58:39.577271 |
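The two "Change server address in the kubeconfig" tasks above exist because k3s writes its kubeconfig pointing at `https://127.0.0.1:6443`, which is only reachable on the node itself; the play rewrites the `server:` entry to an address reachable from the manager. A minimal sketch of that rewrite, with an illustrative sample document and the node address taken from the log:

```python
import re

# Sketch of the server-address rewrite. The sample kubeconfig fragment and
# the helper are illustrative; the playbook performs the same substitution
# with its own tooling.
SAMPLE_KUBECONFIG = """\
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
"""

def rewrite_server(kubeconfig_text: str, new_host: str) -> str:
    """Point every 'server:' entry at new_host, preserving the port."""
    return re.sub(
        r"server: https://[^:\s]+:(\d+)",
        rf"server: https://{new_host}:\1",
        kubeconfig_text,
    )

print(rewrite_server(SAMPLE_KUBECONFIG, "192.168.16.10"))
```

Keeping the port in a capture group means the same helper works whether the API is served on 6443 or behind a VIP on another port.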
orchestrator | ok: [testbed-node-5]
2025-07-12 19:58:39.577282 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:58:39.577292 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:58:39.577303 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:58:39.577313 | orchestrator |
2025-07-12 19:58:39.577324 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 19:58:39.577336 | orchestrator | Saturday 12 July 2025 19:57:25 +0000 (0:00:01.087) 0:00:01.476 *********
2025-07-12 19:58:39.577349 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-12 19:58:39.577361 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-12 19:58:39.577373 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-12 19:58:39.577386 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-12 19:58:39.577398 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-12 19:58:39.577411 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-07-12 19:58:39.577423 | orchestrator |
2025-07-12 19:58:39.577442 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-07-12 19:58:39.577455 | orchestrator |
2025-07-12 19:58:39.577466 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-07-12 19:58:39.577478 | orchestrator | Saturday 12 July 2025 19:57:26 +0000 (0:00:01.159) 0:00:02.636 *********
2025-07-12 19:58:39.577492 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 19:58:39.577506 | orchestrator |
2025-07-12 19:58:39.577602 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-07-12 19:58:39.577618 | orchestrator | Saturday 12 July 2025 19:57:28 +0000 (0:00:02.197) 0:00:04.834 *********
2025-07-12 19:58:39.577629 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-07-12 19:58:39.577641 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-07-12 19:58:39.577652 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-07-12 19:58:39.577662 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-07-12 19:58:39.577744 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-07-12 19:58:39.577756 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-07-12 19:58:39.577767 | orchestrator |
2025-07-12 19:58:39.577777 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-07-12 19:58:39.577788 | orchestrator | Saturday 12 July 2025 19:57:30 +0000 (0:00:02.006) 0:00:06.841 *********
2025-07-12 19:58:39.577799 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-07-12 19:58:39.577810 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-07-12 19:58:39.577820 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-07-12 19:58:39.577831 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-07-12 19:58:39.577849 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-07-12 19:58:39.577860 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-07-12 19:58:39.577871 | orchestrator |
2025-07-12 19:58:39.577882 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-07-12 19:58:39.577893 | orchestrator | Saturday 12 July 2025 19:57:33 +0000 (0:00:02.698) 0:00:09.539 *********
2025-07-12 19:58:39.577927 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-07-12 19:58:39.577938 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:58:39.577949 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-07-12 19:58:39.577960 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:58:39.577970 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-07-12 19:58:39.577981 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:58:39.577991 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-07-12 19:58:39.578002 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:58:39.578013 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-07-12 19:58:39.578072 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:58:39.578083 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-07-12 19:58:39.578094 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:58:39.578105 | orchestrator |
2025-07-12 19:58:39.578116 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-07-12 19:58:39.578127 | orchestrator | Saturday 12 July 2025 19:57:34 +0000 (0:00:00.772) 0:00:10.966 *********
2025-07-12 19:58:39.578150 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:58:39.578162 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:58:39.578172 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:58:39.578183 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:58:39.578194 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:58:39.578205 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:58:39.578215 | orchestrator |
2025-07-12 19:58:39.578226 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-07-12 19:58:39.578246 | orchestrator | Saturday 12 July 2025 19:57:35 +0000 (0:00:00.772) 0:00:11.739 *********
2025-07-12 19:58:39.578260 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value':
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 19:58:39.578278 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 19:58:39.578290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 19:58:39.578307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 19:58:39.578319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 19:58:39.578338 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 19:58:39.578358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 19:58:39.578370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 19:58:39.578384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578455 | orchestrator |
2025-07-12 19:58:39.578467 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-07-12 19:58:39.578478 | orchestrator | Saturday 12 July 2025 19:57:37 +0000 (0:00:01.620) 0:00:13.360 *********
2025-07-12 19:58:39.578490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578502 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578677 | orchestrator |
2025-07-12 19:58:39.578688 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-07-12 19:58:39.578699 | orchestrator | Saturday 12 July 2025 19:57:40 +0000 (0:00:03.095) 0:00:16.456 *********
2025-07-12 19:58:39.578710 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:58:39.578722 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:58:39.578732 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:58:39.578743 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:58:39.578754 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:58:39.578765 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:58:39.578776 | orchestrator |
2025-07-12 19:58:39.578787 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-07-12 19:58:39.578798 | orchestrator | Saturday 12 July 2025 19:57:42 +0000 (0:00:01.781) 0:00:18.237 *********
2025-07-12 19:58:39.578809 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-07-12 19:58:39.578978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.578991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.579002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-07-12 19:58:39.579013 | orchestrator |
2025-07-12 19:58:39.579025 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 19:58:39.579036 | orchestrator | Saturday 12 July 2025 19:57:45 +0000 (0:00:03.160) 0:00:21.398 *********
2025-07-12 19:58:39.579047 | orchestrator |
2025-07-12 19:58:39.579058 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 19:58:39.579069 | orchestrator | Saturday 12 July 2025 19:57:45 +0000 (0:00:00.137) 0:00:21.535 *********
2025-07-12 19:58:39.579080 | orchestrator |
2025-07-12 19:58:39.579091 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 19:58:39.579102 | orchestrator | Saturday 12 July 2025 19:57:45 +0000 (0:00:00.134) 0:00:21.670 *********
2025-07-12 19:58:39.579113 | orchestrator |
2025-07-12 19:58:39.579123 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 19:58:39.579134 | orchestrator | Saturday 12 July 2025 19:57:45 +0000 (0:00:00.303) 0:00:21.974 *********
2025-07-12 19:58:39.579145 | orchestrator |
2025-07-12 19:58:39.579156 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 19:58:39.579173 | orchestrator | Saturday 12 July 2025 19:57:46 +0000 (0:00:00.204) 0:00:22.178 *********
2025-07-12 19:58:39.579184 | orchestrator |
2025-07-12 19:58:39.579195 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-07-12 19:58:39.579205 | orchestrator | Saturday 12 July 2025 19:57:46 +0000 (0:00:00.346) 0:00:22.524 *********
2025-07-12 19:58:39.579216 | orchestrator |
2025-07-12 19:58:39.579227 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-07-12 19:58:39.579238 | orchestrator | Saturday 12 July 2025 19:57:47 +0000 (0:00:00.740) 0:00:23.264 *********
2025-07-12 19:58:39.579249 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:58:39.579260 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:58:39.579271 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:58:39.579282 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:58:39.579292 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:58:39.579303 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:58:39.579314 | orchestrator |
2025-07-12 19:58:39.579325 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-07-12 19:58:39.579336 | orchestrator | Saturday 12 July 2025 19:58:01 +0000 (0:00:14.150) 0:00:37.415 *********
2025-07-12 19:58:39.579347 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:58:39.579358 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:58:39.579369 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:58:39.579380 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:58:39.579390 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:58:39.579401 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:58:39.579412 | orchestrator |
2025-07-12 19:58:39.579423 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-07-12 19:58:39.579433 | orchestrator | Saturday 12 July 2025 19:58:05 +0000 (0:00:03.807) 0:00:41.222 *********
2025-07-12 19:58:39.579444 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:58:39.579455 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:58:39.579466 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:58:39.579477 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:58:39.579487 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:58:39.579498 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:58:39.579509 | orchestrator |
2025-07-12 19:58:39.579519 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-07-12 19:58:39.579530 | orchestrator | Saturday 12 July 2025 19:58:11 +0000 (0:00:06.484) 0:00:47.706 *********
2025-07-12 19:58:39.579547 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-07-12 19:58:39.579558 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-07-12 19:58:39.579570 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-07-12 19:58:39.579581 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-07-12 19:58:39.579591 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-07-12 19:58:39.579602 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-07-12 19:58:39.579613 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-07-12 19:58:39.579624 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-07-12 19:58:39.579635 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-07-12 19:58:39.579646 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-07-12 19:58:39.579656 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-07-12 19:58:39.579674 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-07-12 19:58:39.579685 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-07-12 19:58:39.579696 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-07-12 19:58:39.579706 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-07-12 19:58:39.579717 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-07-12 19:58:39.579728 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-07-12 19:58:39.579739 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-07-12 19:58:39.579749 | orchestrator |
2025-07-12 19:58:39.579760 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-07-12 19:58:39.579771 | orchestrator | Saturday 12 July 2025 19:58:20 +0000 (0:00:08.674) 0:00:56.380 *********
2025-07-12 19:58:39.579782 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-07-12 19:58:39.579793 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:58:39.579804 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-07-12 19:58:39.579815 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:58:39.579826 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-07-12 19:58:39.579836 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:58:39.579847 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-07-12 19:58:39.579858 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-07-12 19:58:39.579869 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-07-12 19:58:39.579880 | orchestrator |
2025-07-12 19:58:39.579891 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-07-12 19:58:39.579930 | orchestrator | Saturday 12 July 2025 19:58:23 +0000 (0:00:03.240) 0:00:59.621 *********
2025-07-12 19:58:39.579943 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-07-12 19:58:39.579954 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:58:39.579964 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-07-12 19:58:39.580708 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:58:39.580731 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-07-12 19:58:39.580744 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:58:39.580756 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-07-12 19:58:39.580769 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-07-12 19:58:39.580781 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-07-12 19:58:39.580793 | orchestrator |
2025-07-12 19:58:39.580806 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-07-12 19:58:39.580818 | orchestrator | Saturday 12 July 2025 19:58:28 +0000 (0:00:04.911) 0:01:04.532 *********
2025-07-12 19:58:39.580830 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:58:39.580843 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:58:39.580855 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:58:39.580867 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:58:39.580880 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:58:39.580892 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:58:39.580925 | orchestrator |
2025-07-12 19:58:39.580938 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:58:39.580951 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 19:58:39.580985 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 19:58:39.580999 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 19:58:39.581011 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 19:58:39.581024 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 19:58:39.581037 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 19:58:39.581053 | orchestrator |
2025-07-12 19:58:39.581064 | orchestrator |
2025-07-12 19:58:39.581075 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:58:39.581086 | orchestrator | Saturday 12 July 2025 19:58:36 +0000 (0:00:08.487) 0:01:13.020 *********
2025-07-12 19:58:39.581097 | orchestrator | ===============================================================================
2025-07-12 19:58:39.581108 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.97s
2025-07-12 19:58:39.581119 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 14.15s
2025-07-12 19:58:39.581130 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.68s
2025-07-12 19:58:39.581141 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.91s
2025-07-12 19:58:39.581151 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 3.81s
2025-07-12 19:58:39.581163 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.24s
2025-07-12 19:58:39.581173 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.16s
2025-07-12 19:58:39.581184 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.10s
2025-07-12 19:58:39.581195 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.70s
2025-07-12 19:58:39.581206 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.20s
2025-07-12 19:58:39.581217 | orchestrator | module-load : Load modules ---------------------------------------------- 2.01s
2025-07-12 19:58:39.581227 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.87s
2025-07-12 19:58:39.581238 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.78s
2025-07-12 19:58:39.581249 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.62s
2025-07-12 19:58:39.581260 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.43s
2025-07-12 19:58:39.581270 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.16s
2025-07-12 19:58:39.581281 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.09s
2025-07-12 19:58:39.581292 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.77s
2025-07-12 19:58:39.581303 | orchestrator | 2025-07-12 19:58:39 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:39.581314 | orchestrator | 2025-07-12 19:58:39 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:39.581325 | orchestrator | 2025-07-12 19:58:39 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:58:39.581336 | orchestrator | 2025-07-12 19:58:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:42.609579 | orchestrator | 2025-07-12 19:58:42 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:42.610273 | orchestrator | 2025-07-12 19:58:42 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:42.612915 | orchestrator | 2025-07-12 19:58:42 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:42.615372 | orchestrator | 2025-07-12 19:58:42 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:58:42.615421 | orchestrator | 2025-07-12 19:58:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:45.647975 | orchestrator | 2025-07-12 19:58:45 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:45.650803 | orchestrator | 2025-07-12 19:58:45 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:45.651752 | orchestrator | 2025-07-12 19:58:45 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:45.652766 | orchestrator | 2025-07-12 19:58:45 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:58:45.653094 | orchestrator | 2025-07-12 19:58:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:48.691659 | orchestrator | 2025-07-12 19:58:48 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:48.692251 | orchestrator | 2025-07-12 19:58:48 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:48.692859 | orchestrator | 2025-07-12 19:58:48 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:48.693870 | orchestrator | 2025-07-12 19:58:48 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:58:48.693991 | orchestrator | 2025-07-12 19:58:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:51.740351 | orchestrator | 2025-07-12 19:58:51 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:51.742502 | orchestrator | 2025-07-12 19:58:51 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:51.744622 | orchestrator | 2025-07-12 19:58:51 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:51.745735 | orchestrator | 2025-07-12 19:58:51 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:58:51.745931 | orchestrator | 2025-07-12 19:58:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:54.789860 | orchestrator | 2025-07-12 19:58:54 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:54.791516 | orchestrator | 2025-07-12 19:58:54 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:54.792665 | orchestrator | 2025-07-12 19:58:54 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:54.794462 | orchestrator | 2025-07-12 19:58:54 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:58:54.794523 | orchestrator | 2025-07-12 19:58:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:58:57.832419 | orchestrator | 2025-07-12 19:58:57 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:58:57.832529 | orchestrator | 2025-07-12 19:58:57 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:58:57.833282 | orchestrator | 2025-07-12 19:58:57 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:58:57.834369 | orchestrator | 2025-07-12 19:58:57 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:58:57.834423 | orchestrator | 2025-07-12 19:58:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:59:00.877519 | orchestrator | 2025-07-12 19:59:00 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:59:00.881082 | orchestrator | 2025-07-12 19:59:00 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:59:00.881810 | orchestrator | 2025-07-12 19:59:00 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:59:00.885757 | orchestrator | 2025-07-12 19:59:00 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:59:00.885814 | orchestrator | 2025-07-12 19:59:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:59:03.927577 | orchestrator | 2025-07-12 19:59:03 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:59:03.930314 | orchestrator | 2025-07-12 19:59:03 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:59:03.931948 | orchestrator | 2025-07-12 19:59:03 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:59:03.933886 | orchestrator | 2025-07-12 19:59:03 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:59:03.933977 | orchestrator | 2025-07-12 19:59:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:59:06.981295 | orchestrator | 2025-07-12 19:59:06 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:59:06.981429 | orchestrator | 2025-07-12 19:59:06 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 19:59:06.981658 | orchestrator | 2025-07-12 19:59:06 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED
2025-07-12 19:59:06.982865 | orchestrator | 2025-07-12 19:59:06 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED
2025-07-12 19:59:06.982906 | orchestrator | 2025-07-12 19:59:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 19:59:10.029738 | orchestrator | 2025-07-12 19:59:10 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 19:59:10.031378 | orchestrator | 2025-07-12 19:59:10 | INFO  | Task 
a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:10.032787 | orchestrator | 2025-07-12 19:59:10 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:10.036537 | orchestrator | 2025-07-12 19:59:10 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:10.036608 | orchestrator | 2025-07-12 19:59:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:13.086126 | orchestrator | 2025-07-12 19:59:13 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:13.087304 | orchestrator | 2025-07-12 19:59:13 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:13.088795 | orchestrator | 2025-07-12 19:59:13 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:13.090392 | orchestrator | 2025-07-12 19:59:13 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:13.090464 | orchestrator | 2025-07-12 19:59:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:16.136075 | orchestrator | 2025-07-12 19:59:16 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:16.136885 | orchestrator | 2025-07-12 19:59:16 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:16.136945 | orchestrator | 2025-07-12 19:59:16 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:16.137789 | orchestrator | 2025-07-12 19:59:16 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:16.137918 | orchestrator | 2025-07-12 19:59:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:19.186217 | orchestrator | 2025-07-12 19:59:19 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:19.199866 | orchestrator | 2025-07-12 19:59:19 | INFO  | Task 
a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:19.203837 | orchestrator | 2025-07-12 19:59:19 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:19.206617 | orchestrator | 2025-07-12 19:59:19 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:19.207014 | orchestrator | 2025-07-12 19:59:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:22.256337 | orchestrator | 2025-07-12 19:59:22 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:22.258477 | orchestrator | 2025-07-12 19:59:22 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:22.261284 | orchestrator | 2025-07-12 19:59:22 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:22.263515 | orchestrator | 2025-07-12 19:59:22 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:22.263995 | orchestrator | 2025-07-12 19:59:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:25.309678 | orchestrator | 2025-07-12 19:59:25 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:25.310087 | orchestrator | 2025-07-12 19:59:25 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:25.311092 | orchestrator | 2025-07-12 19:59:25 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:25.312050 | orchestrator | 2025-07-12 19:59:25 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:25.312088 | orchestrator | 2025-07-12 19:59:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:28.352870 | orchestrator | 2025-07-12 19:59:28 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:28.354785 | orchestrator | 2025-07-12 19:59:28 | INFO  | Task 
a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:28.354825 | orchestrator | 2025-07-12 19:59:28 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:28.356159 | orchestrator | 2025-07-12 19:59:28 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:28.356191 | orchestrator | 2025-07-12 19:59:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:31.399441 | orchestrator | 2025-07-12 19:59:31 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:31.401781 | orchestrator | 2025-07-12 19:59:31 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:31.404032 | orchestrator | 2025-07-12 19:59:31 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:31.405892 | orchestrator | 2025-07-12 19:59:31 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:31.405920 | orchestrator | 2025-07-12 19:59:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:34.444272 | orchestrator | 2025-07-12 19:59:34 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:34.444756 | orchestrator | 2025-07-12 19:59:34 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:34.445849 | orchestrator | 2025-07-12 19:59:34 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:34.450156 | orchestrator | 2025-07-12 19:59:34 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:34.450203 | orchestrator | 2025-07-12 19:59:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:37.481361 | orchestrator | 2025-07-12 19:59:37 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:37.481447 | orchestrator | 2025-07-12 19:59:37 | INFO  | Task 
a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:37.488401 | orchestrator | 2025-07-12 19:59:37 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:37.488629 | orchestrator | 2025-07-12 19:59:37 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:37.488652 | orchestrator | 2025-07-12 19:59:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:40.511621 | orchestrator | 2025-07-12 19:59:40 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:40.512420 | orchestrator | 2025-07-12 19:59:40 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:40.513429 | orchestrator | 2025-07-12 19:59:40 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:40.514179 | orchestrator | 2025-07-12 19:59:40 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:40.514204 | orchestrator | 2025-07-12 19:59:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:43.543169 | orchestrator | 2025-07-12 19:59:43 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:43.543255 | orchestrator | 2025-07-12 19:59:43 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:43.543741 | orchestrator | 2025-07-12 19:59:43 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:43.545144 | orchestrator | 2025-07-12 19:59:43 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:43.545179 | orchestrator | 2025-07-12 19:59:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:46.576178 | orchestrator | 2025-07-12 19:59:46 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:46.576389 | orchestrator | 2025-07-12 19:59:46 | INFO  | Task 
a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:46.578726 | orchestrator | 2025-07-12 19:59:46 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:46.580811 | orchestrator | 2025-07-12 19:59:46 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:46.581223 | orchestrator | 2025-07-12 19:59:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:49.626127 | orchestrator | 2025-07-12 19:59:49 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:49.628989 | orchestrator | 2025-07-12 19:59:49 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:49.631061 | orchestrator | 2025-07-12 19:59:49 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:49.632682 | orchestrator | 2025-07-12 19:59:49 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:49.633069 | orchestrator | 2025-07-12 19:59:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:52.672585 | orchestrator | 2025-07-12 19:59:52 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:52.675049 | orchestrator | 2025-07-12 19:59:52 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:52.677576 | orchestrator | 2025-07-12 19:59:52 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:52.679375 | orchestrator | 2025-07-12 19:59:52 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:52.679657 | orchestrator | 2025-07-12 19:59:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:55.724326 | orchestrator | 2025-07-12 19:59:55 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:55.724410 | orchestrator | 2025-07-12 19:59:55 | INFO  | Task 
a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:55.724424 | orchestrator | 2025-07-12 19:59:55 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:55.724452 | orchestrator | 2025-07-12 19:59:55 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:55.724464 | orchestrator | 2025-07-12 19:59:55 | INFO  | Wait 1 second(s) until the next check 2025-07-12 19:59:58.763329 | orchestrator | 2025-07-12 19:59:58 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 19:59:58.765392 | orchestrator | 2025-07-12 19:59:58 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 19:59:58.767471 | orchestrator | 2025-07-12 19:59:58 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 19:59:58.769124 | orchestrator | 2025-07-12 19:59:58 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 19:59:58.769264 | orchestrator | 2025-07-12 19:59:58 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:00:01.796743 | orchestrator | 2025-07-12 20:00:01 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 20:00:01.797204 | orchestrator | 2025-07-12 20:00:01 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 20:00:01.797811 | orchestrator | 2025-07-12 20:00:01 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 20:00:01.798712 | orchestrator | 2025-07-12 20:00:01 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 20:00:01.798737 | orchestrator | 2025-07-12 20:00:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:00:04.828172 | orchestrator | 2025-07-12 20:00:04 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 20:00:04.829189 | orchestrator | 2025-07-12 20:00:04 | INFO  | Task 
a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 20:00:04.830993 | orchestrator | 2025-07-12 20:00:04 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 20:00:04.831545 | orchestrator | 2025-07-12 20:00:04 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 20:00:04.831806 | orchestrator | 2025-07-12 20:00:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:00:07.870718 | orchestrator | 2025-07-12 20:00:07 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 20:00:07.874702 | orchestrator | 2025-07-12 20:00:07 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 20:00:07.878584 | orchestrator | 2025-07-12 20:00:07 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 20:00:07.880697 | orchestrator | 2025-07-12 20:00:07 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 20:00:07.881121 | orchestrator | 2025-07-12 20:00:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:00:10.908882 | orchestrator | 2025-07-12 20:00:10 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 20:00:10.909313 | orchestrator | 2025-07-12 20:00:10 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 20:00:10.909836 | orchestrator | 2025-07-12 20:00:10 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 20:00:10.910803 | orchestrator | 2025-07-12 20:00:10 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 20:00:10.910878 | orchestrator | 2025-07-12 20:00:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:00:13.947088 | orchestrator | 2025-07-12 20:00:13 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 20:00:13.947188 | orchestrator | 2025-07-12 20:00:13 | INFO  | Task 
a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 20:00:13.947843 | orchestrator | 2025-07-12 20:00:13 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 20:00:13.948305 | orchestrator | 2025-07-12 20:00:13 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 20:00:13.948327 | orchestrator | 2025-07-12 20:00:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:00:16.983004 | orchestrator | 2025-07-12 20:00:16 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 20:00:16.984184 | orchestrator | 2025-07-12 20:00:16 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 20:00:16.985775 | orchestrator | 2025-07-12 20:00:16 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state STARTED 2025-07-12 20:00:16.987294 | orchestrator | 2025-07-12 20:00:16 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 20:00:16.987334 | orchestrator | 2025-07-12 20:00:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:00:20.028311 | orchestrator | 2025-07-12 20:00:20 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 20:00:20.031571 | orchestrator | 2025-07-12 20:00:20 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 20:00:20.033904 | orchestrator | 2025-07-12 20:00:20 | INFO  | Task 493a28eb-207c-433f-bf7d-a6cdc72ffeea is in state SUCCESS 2025-07-12 20:00:20.034159 | orchestrator | 2025-07-12 20:00:20.035863 | orchestrator | 2025-07-12 20:00:20.035895 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-07-12 20:00:20.035907 | orchestrator | 2025-07-12 20:00:20.035918 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-12 20:00:20.035930 | orchestrator | Saturday 12 July 2025 19:57:49 +0000 (0:00:00.433) 0:00:00.433 
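The polling pattern in the log above (query each task's state, wait, re-check until it leaves STARTED) can be sketched as follows. `get_task_state` is a hypothetical stand-in for the OSISM task-status API that produces these INFO lines, not the actual client call:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=3.0):
    """Poll task states until every task has left the STARTED state.

    get_task_state is a hypothetical callable mapping a task id to a
    state string such as "STARTED" or "SUCCESS". Returns the final
    state of each task.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state != "STARTED":
                results[task_id] = state  # terminal state reached
        pending -= results.keys()
        if pending:
            print("INFO  | Wait 1 second(s) until the next check")
            time.sleep(interval)
    return results
```

In the log, one of the four tasks (493a28eb-…) reaches SUCCESS after roughly 100 seconds of polling while the other three continue running.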
*********
2025-07-12 20:00:20.035991 | orchestrator | ok: [localhost] => {
2025-07-12 20:00:20.036013 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-07-12 20:00:20.036032 | orchestrator | }
2025-07-12 20:00:20.036054 | orchestrator |
2025-07-12 20:00:20.036075 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-07-12 20:00:20.036095 | orchestrator | Saturday 12 July 2025 19:57:49 +0000 (0:00:00.086) 0:00:00.520 *********
2025-07-12 20:00:20.036118 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-07-12 20:00:20.036139 | orchestrator | ...ignoring
2025-07-12 20:00:20.036152 | orchestrator |
2025-07-12 20:00:20.036163 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-07-12 20:00:20.036201 | orchestrator | Saturday 12 July 2025 19:57:52 +0000 (0:00:02.974) 0:00:03.494 *********
2025-07-12 20:00:20.036213 | orchestrator | skipping: [localhost]
2025-07-12 20:00:20.036224 | orchestrator |
2025-07-12 20:00:20.036235 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-07-12 20:00:20.036246 | orchestrator | Saturday 12 July 2025 19:57:52 +0000 (0:00:00.061) 0:00:03.555 *********
2025-07-12 20:00:20.036257 | orchestrator | ok: [localhost]
2025-07-12 20:00:20.036268 | orchestrator |
2025-07-12 20:00:20.036279 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:00:20.036289 | orchestrator |
2025-07-12 20:00:20.036300 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:00:20.036311 | orchestrator | Saturday 12 July 2025 19:57:52 +0000 (0:00:00.177) 0:00:03.733 *********
2025-07-12 20:00:20.036322 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:20.036333 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:20.036344 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:20.036355 | orchestrator |
2025-07-12 20:00:20.036366 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:00:20.036377 | orchestrator | Saturday 12 July 2025 19:57:52 +0000 (0:00:00.377) 0:00:04.110 *********
2025-07-12 20:00:20.036387 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-07-12 20:00:20.036399 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-07-12 20:00:20.036410 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-07-12 20:00:20.036420 | orchestrator |
2025-07-12 20:00:20.036431 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-07-12 20:00:20.036442 | orchestrator |
2025-07-12 20:00:20.036453 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-07-12 20:00:20.036464 | orchestrator | Saturday 12 July 2025 19:57:53 +0000 (0:00:01.048) 0:00:05.159 *********
2025-07-12 20:00:20.036478 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:00:20.036491 | orchestrator |
2025-07-12 20:00:20.036504 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-07-12 20:00:20.036517 | orchestrator | Saturday 12 July 2025 19:57:54 +0000 (0:00:00.802) 0:00:05.961 *********
2025-07-12 20:00:20.036530 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:20.036543 | orchestrator |
2025-07-12 20:00:20.036556 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-07-12 20:00:20.036569 | orchestrator | Saturday 12 July 2025 19:57:55 +0000 (0:00:01.150) 0:00:07.112 *********
2025-07-12 20:00:20.036582 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:20.036596 | orchestrator |
2025-07-12 20:00:20.036609 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-07-12 20:00:20.036622 | orchestrator | Saturday 12 July 2025 19:57:56 +0000 (0:00:00.650) 0:00:07.762 *********
2025-07-12 20:00:20.036635 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:20.036648 | orchestrator |
2025-07-12 20:00:20.036660 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-07-12 20:00:20.036673 | orchestrator | Saturday 12 July 2025 19:57:56 +0000 (0:00:00.487) 0:00:08.249 *********
2025-07-12 20:00:20.036685 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:20.036698 | orchestrator |
2025-07-12 20:00:20.036711 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-07-12 20:00:20.036724 | orchestrator | Saturday 12 July 2025 19:57:57 +0000 (0:00:00.464) 0:00:08.713 *********
2025-07-12 20:00:20.036737 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:20.036750 | orchestrator |
2025-07-12 20:00:20.036763 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-07-12 20:00:20.036776 | orchestrator | Saturday 12 July 2025 19:57:58 +0000 (0:00:00.689) 0:00:09.403 *********
2025-07-12 20:00:20.036789 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-2, testbed-node-1
2025-07-12 20:00:20.036810 | orchestrator |
2025-07-12 20:00:20.036822 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-07-12 20:00:20.036848 | orchestrator | Saturday 12 July 2025 19:57:59 +0000 (0:00:01.625) 0:00:11.029 *********
2025-07-12 20:00:20.036860 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:20.036871 | orchestrator |
2025-07-12
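The 'Check RabbitMQ service' task above probes the management endpoint (192.168.16.9:15672, from the log) for the string "RabbitMQ Management" and treats a timeout as "not yet deployed", which then selects between the upgrade path and `kolla_action_ng`. A minimal Python equivalent of that gate; `is_rabbitmq_running` and `pick_kolla_action` are illustrative names (the playbook itself uses Ansible's wait_for and set_fact):

```python
import socket
import time
import urllib.error
import urllib.request

def is_rabbitmq_running(host, port=15672, search="RabbitMQ Management", timeout=2):
    """Return True if the management UI answers within `timeout` seconds
    and its response contains `search`; False otherwise (service not yet
    deployed). Mirrors the wait_for probe in 'Check RabbitMQ service'."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(f"http://{host}:{port}/", timeout=1) as resp:
                if search in resp.read().decode(errors="replace"):
                    return True
        except (urllib.error.URLError, socket.timeout, OSError):
            time.sleep(0.5)  # not reachable yet; retry until the deadline
    return False

def pick_kolla_action(running, kolla_action_ng="deploy"):
    """Same decision the two set_fact tasks make: upgrade a running
    cluster, otherwise fall back to kolla_action_ng."""
    return "upgrade" if running else kolla_action_ng
```

In this run the probe timed out after 2 seconds (the `fatal: ... ...ignoring` entry), so the upgrade branch was skipped and `kolla_action_ng` was used, matching a fresh deployment.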
20:00:20.036882 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-07-12 20:00:20.036893 | orchestrator | Saturday 12 July 2025 19:58:00 +0000 (0:00:00.936) 0:00:11.966 *********
2025-07-12 20:00:20.036904 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:20.036915 | orchestrator |
2025-07-12 20:00:20.036926 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-07-12 20:00:20.036960 | orchestrator | Saturday 12 July 2025 19:58:01 +0000 (0:00:01.060) 0:00:12.864 *********
2025-07-12 20:00:20.036982 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:20.037001 | orchestrator |
2025-07-12 20:00:20.037041 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-07-12 20:00:20.037063 | orchestrator | Saturday 12 July 2025 19:58:02 +0000 (0:00:02.350) 0:00:13.924 *********
2025-07-12 20:00:20.037088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:00:20.037110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', ...})
2025-07-12 20:00:20.037124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', ...})
2025-07-12 20:00:20.037145 | orchestrator |
2025-07-12 20:00:20.037157 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-07-12 20:00:20.037168 | orchestrator | Saturday 12 July 2025 19:58:04 +0000 (0:00:02.350) 0:00:16.275 *********
2025-07-12 20:00:20.037194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', ...})
2025-07-12 20:00:20.037208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', ...})
2025-07-12 20:00:20.037221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', ...})
2025-07-12 20:00:20.037233 | orchestrator |
2025-07-12 20:00:20.037244 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-07-12 20:00:20.037255 | orchestrator |
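The service definition logged above carries a healthcheck block (`'interval': '30'`, `'retries': '3'`, `'test': ['CMD-SHELL', 'healthcheck_rabbitmq']`, …) that corresponds one-to-one with Docker's health-check options. A rough, illustrative translation into `docker run` flags; this is a sketch of the mapping, not kolla-ansible's actual code path:

```python
def healthcheck_to_docker_args(hc):
    """Translate a kolla-style healthcheck dict (as logged above) into
    `docker run` flags. The dict's numeric values are strings of seconds."""
    test = hc["test"]
    # ['CMD-SHELL', 'healthcheck_rabbitmq'] means: run the command via a shell
    cmd = test[1] if test[0] == "CMD-SHELL" else " ".join(test)
    return [
        "--health-cmd", cmd,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", hc["retries"],
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
```

The `haproxy.rabbitmq_management` entry in the same dict is what later exposes the management UI on port 15672, the endpoint the earlier 'Check RabbitMQ service' probe targets.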
Saturday 12 July 2025 19:58:09 +0000 (0:00:04.286) 0:00:20.562 ********* 2025-07-12 20:00:20.037272 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 20:00:20.037283 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 20:00:20.037294 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 20:00:20.037305 | orchestrator | 2025-07-12 20:00:20.037316 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-07-12 20:00:20.037326 | orchestrator | Saturday 12 July 2025 19:58:11 +0000 (0:00:02.413) 0:00:22.975 ********* 2025-07-12 20:00:20.037337 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 20:00:20.037348 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 20:00:20.037359 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 20:00:20.037386 | orchestrator | 2025-07-12 20:00:20.037397 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-07-12 20:00:20.037408 | orchestrator | Saturday 12 July 2025 19:58:14 +0000 (0:00:02.961) 0:00:25.936 ********* 2025-07-12 20:00:20.037424 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 20:00:20.037435 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 20:00:20.037446 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 20:00:20.037457 | orchestrator | 2025-07-12 20:00:20.037468 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 
2025-07-12 20:00:20.037479 | orchestrator | Saturday 12 July 2025 19:58:16 +0000 (0:00:02.230) 0:00:28.167 ********* 2025-07-12 20:00:20.037496 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 20:00:20.037508 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 20:00:20.037519 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 20:00:20.037530 | orchestrator | 2025-07-12 20:00:20.037541 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-07-12 20:00:20.037552 | orchestrator | Saturday 12 July 2025 19:58:19 +0000 (0:00:02.461) 0:00:30.628 ********* 2025-07-12 20:00:20.037562 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 20:00:20.037573 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 20:00:20.037584 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 20:00:20.037595 | orchestrator | 2025-07-12 20:00:20.037606 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-07-12 20:00:20.037617 | orchestrator | Saturday 12 July 2025 19:58:21 +0000 (0:00:02.437) 0:00:33.066 ********* 2025-07-12 20:00:20.037628 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 20:00:20.037639 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 20:00:20.037650 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 20:00:20.037661 | orchestrator | 2025-07-12 20:00:20.037672 | orchestrator | TASK [rabbitmq : 
include_tasks] ************************************************ 2025-07-12 20:00:20.037683 | orchestrator | Saturday 12 July 2025 19:58:23 +0000 (0:00:02.169) 0:00:35.235 ********* 2025-07-12 20:00:20.037693 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:00:20.037704 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:00:20.037715 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:00:20.037726 | orchestrator | 2025-07-12 20:00:20.037743 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-07-12 20:00:20.037754 | orchestrator | Saturday 12 July 2025 19:58:24 +0000 (0:00:00.701) 0:00:35.937 ********* 2025-07-12 20:00:20.037766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 20:00:20.037784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 20:00:20.037805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 20:00:20.037817 | orchestrator | 2025-07-12 
20:00:20.037828 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-07-12 20:00:20.037839 | orchestrator | Saturday 12 July 2025 19:58:26 +0000 (0:00:01.865) 0:00:37.802 ********* 2025-07-12 20:00:20.037850 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:00:20.037861 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:00:20.037872 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:00:20.037883 | orchestrator | 2025-07-12 20:00:20.037894 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-07-12 20:00:20.037905 | orchestrator | Saturday 12 July 2025 19:58:27 +0000 (0:00:01.030) 0:00:38.833 ********* 2025-07-12 20:00:20.037924 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:00:20.037957 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:00:20.037970 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:00:20.037990 | orchestrator | 2025-07-12 20:00:20.038008 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-07-12 20:00:20.038091 | orchestrator | Saturday 12 July 2025 19:58:37 +0000 (0:00:09.836) 0:00:48.670 ********* 2025-07-12 20:00:20.038103 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:00:20.038115 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:00:20.038125 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:00:20.038136 | orchestrator | 2025-07-12 20:00:20.038148 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 20:00:20.038158 | orchestrator | 2025-07-12 20:00:20.038169 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 20:00:20.038180 | orchestrator | Saturday 12 July 2025 19:58:37 +0000 (0:00:00.555) 0:00:49.225 ********* 2025-07-12 20:00:20.038191 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:00:20.038321 | orchestrator | 
2025-07-12 20:00:20.038338 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 20:00:20.038349 | orchestrator | Saturday 12 July 2025 19:58:38 +0000 (0:00:00.643) 0:00:49.869 ********* 2025-07-12 20:00:20.038360 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:00:20.038371 | orchestrator | 2025-07-12 20:00:20.038382 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 20:00:20.038393 | orchestrator | Saturday 12 July 2025 19:58:38 +0000 (0:00:00.232) 0:00:50.102 ********* 2025-07-12 20:00:20.038403 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:00:20.038414 | orchestrator | 2025-07-12 20:00:20.038425 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 20:00:20.038436 | orchestrator | Saturday 12 July 2025 19:58:40 +0000 (0:00:02.158) 0:00:52.260 ********* 2025-07-12 20:00:20.038447 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:00:20.038458 | orchestrator | 2025-07-12 20:00:20.038469 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 20:00:20.038480 | orchestrator | 2025-07-12 20:00:20.038491 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 20:00:20.038502 | orchestrator | Saturday 12 July 2025 19:59:35 +0000 (0:00:54.825) 0:01:47.085 ********* 2025-07-12 20:00:20.038513 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:00:20.038524 | orchestrator | 2025-07-12 20:00:20.038535 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 20:00:20.038546 | orchestrator | Saturday 12 July 2025 19:59:36 +0000 (0:00:00.624) 0:01:47.710 ********* 2025-07-12 20:00:20.038556 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:00:20.038568 | orchestrator | 2025-07-12 20:00:20.038579 | orchestrator | TASK 
[rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 20:00:20.038590 | orchestrator | Saturday 12 July 2025 19:59:36 +0000 (0:00:00.296) 0:01:48.006 ********* 2025-07-12 20:00:20.038601 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:00:20.038612 | orchestrator | 2025-07-12 20:00:20.038623 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 20:00:20.038634 | orchestrator | Saturday 12 July 2025 19:59:38 +0000 (0:00:01.871) 0:01:49.878 ********* 2025-07-12 20:00:20.038645 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:00:20.038656 | orchestrator | 2025-07-12 20:00:20.038666 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 20:00:20.038677 | orchestrator | 2025-07-12 20:00:20.038688 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 20:00:20.038699 | orchestrator | Saturday 12 July 2025 19:59:55 +0000 (0:00:16.544) 0:02:06.422 ********* 2025-07-12 20:00:20.038710 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:00:20.038721 | orchestrator | 2025-07-12 20:00:20.038739 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 20:00:20.038760 | orchestrator | Saturday 12 July 2025 19:59:55 +0000 (0:00:00.572) 0:02:06.995 ********* 2025-07-12 20:00:20.038771 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:00:20.038782 | orchestrator | 2025-07-12 20:00:20.038793 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 20:00:20.038902 | orchestrator | Saturday 12 July 2025 19:59:55 +0000 (0:00:00.243) 0:02:07.239 ********* 2025-07-12 20:00:20.038918 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:00:20.038929 | orchestrator | 2025-07-12 20:00:20.038997 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
******************************** 2025-07-12 20:00:20.039029 | orchestrator | Saturday 12 July 2025 19:59:57 +0000 (0:00:01.690) 0:02:08.930 ********* 2025-07-12 20:00:20.039041 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:00:20.039052 | orchestrator | 2025-07-12 20:00:20.039063 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-07-12 20:00:20.039074 | orchestrator | 2025-07-12 20:00:20.039085 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-07-12 20:00:20.039096 | orchestrator | Saturday 12 July 2025 20:00:13 +0000 (0:00:16.010) 0:02:24.940 ********* 2025-07-12 20:00:20.039106 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:00:20.039117 | orchestrator | 2025-07-12 20:00:20.039128 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-07-12 20:00:20.039139 | orchestrator | Saturday 12 July 2025 20:00:14 +0000 (0:00:00.554) 0:02:25.495 ********* 2025-07-12 20:00:20.039150 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 20:00:20.039160 | orchestrator | enable_outward_rabbitmq_True 2025-07-12 20:00:20.039171 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 20:00:20.039182 | orchestrator | outward_rabbitmq_restart 2025-07-12 20:00:20.039193 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:00:20.039204 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:00:20.039215 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:00:20.039226 | orchestrator | 2025-07-12 20:00:20.039237 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-07-12 20:00:20.039248 | orchestrator | skipping: no hosts matched 2025-07-12 20:00:20.039259 | orchestrator | 2025-07-12 20:00:20.039270 | orchestrator | PLAY [Restart rabbitmq (outward) services] 
************************************* 2025-07-12 20:00:20.039281 | orchestrator | skipping: no hosts matched 2025-07-12 20:00:20.039291 | orchestrator | 2025-07-12 20:00:20.039303 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-07-12 20:00:20.039313 | orchestrator | skipping: no hosts matched 2025-07-12 20:00:20.039324 | orchestrator | 2025-07-12 20:00:20.039335 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:00:20.039347 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 20:00:20.039359 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:00:20.039370 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:00:20.039381 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:00:20.039392 | orchestrator | 2025-07-12 20:00:20.039403 | orchestrator | 2025-07-12 20:00:20.039414 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:00:20.039425 | orchestrator | Saturday 12 July 2025 20:00:16 +0000 (0:00:02.546) 0:02:28.041 ********* 2025-07-12 20:00:20.039436 | orchestrator | =============================================================================== 2025-07-12 20:00:20.039447 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 87.38s 2025-07-12 20:00:20.039467 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 9.84s 2025-07-12 20:00:20.039478 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.72s 2025-07-12 20:00:20.039489 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.29s 
2025-07-12 20:00:20.039500 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.97s 2025-07-12 20:00:20.039511 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.96s 2025-07-12 20:00:20.039522 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.55s 2025-07-12 20:00:20.039533 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.46s 2025-07-12 20:00:20.039547 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.44s 2025-07-12 20:00:20.039559 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.41s 2025-07-12 20:00:20.039572 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.35s 2025-07-12 20:00:20.039585 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.23s 2025-07-12 20:00:20.039598 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.17s 2025-07-12 20:00:20.039609 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.87s 2025-07-12 20:00:20.039619 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.84s 2025-07-12 20:00:20.039630 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.63s 2025-07-12 20:00:20.039641 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.15s 2025-07-12 20:00:20.039665 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.06s 2025-07-12 20:00:20.039677 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.05s 2025-07-12 20:00:20.039688 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.03s 2025-07-12 
20:00:20.039699 | orchestrator | 2025-07-12 20:00:20 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 20:00:20.039710 | orchestrator | 2025-07-12 20:00:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:00:23.074915 | orchestrator | 2025-07-12 20:00:23 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 20:00:23.075287 | orchestrator | 2025-07-12 20:00:23 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 20:00:23.075976 | orchestrator | 2025-07-12 20:00:23 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state STARTED 2025-07-12 20:00:23.076083 | orchestrator | 2025-07-12 20:00:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:01:05.671621 | orchestrator | 2025-07-12 20:01:05 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED 2025-07-12 20:01:05.672093 | orchestrator | 2025-07-12 20:01:05 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED 2025-07-12 20:01:05.673717 | orchestrator | 2025-07-12 20:01:05 | INFO  | Task 058f9d75-05ea-4ddb-8bea-32880e6c3919 is in state SUCCESS 2025-07-12 20:01:05.673747 | orchestrator | 2025-07-12 20:01:05 | INFO  | 
Wait 1 second(s) until the next check 2025-07-12 20:01:05.675208 | orchestrator | 2025-07-12 20:01:05.675254 | orchestrator | 2025-07-12 20:01:05.675268 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:01:05.675280 | orchestrator | 2025-07-12 20:01:05.675569 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:01:05.675582 | orchestrator | Saturday 12 July 2025 19:58:42 +0000 (0:00:00.171) 0:00:00.171 ********* 2025-07-12 20:01:05.675594 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:05.675606 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:05.675618 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:05.675629 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:05.675640 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:05.675651 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:05.675662 | orchestrator | 2025-07-12 20:01:05.675673 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:01:05.675684 | orchestrator | Saturday 12 July 2025 19:58:43 +0000 (0:00:00.671) 0:00:00.842 ********* 2025-07-12 20:01:05.675696 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-07-12 20:01:05.675707 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-07-12 20:01:05.675719 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-07-12 20:01:05.675730 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-07-12 20:01:05.675763 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-07-12 20:01:05.675775 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-07-12 20:01:05.675786 | orchestrator | 2025-07-12 20:01:05.675798 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-07-12 20:01:05.675809 | orchestrator | 2025-07-12 20:01:05.675890 
| orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-07-12 20:01:05.675910 | orchestrator | Saturday 12 July 2025 19:58:44 +0000 (0:00:01.032) 0:00:01.875 *********
2025-07-12 20:01:05.675923 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:01:05.675935 | orchestrator |
2025-07-12 20:01:05.675976 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-07-12 20:01:05.675988 | orchestrator | Saturday 12 July 2025 19:58:45 +0000 (0:00:01.234) 0:00:03.109 *********
2025-07-12 20:01:05.676001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676039 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676073 | orchestrator |
2025-07-12 20:01:05.676096 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-07-12 20:01:05.676108 | orchestrator | Saturday 12 July 2025 19:58:47 +0000 (0:00:01.731) 0:00:04.841 *********
2025-07-12 20:01:05.676120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676146 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676203 | orchestrator |
2025-07-12 20:01:05.676215 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-07-12 20:01:05.676226 | orchestrator | Saturday 12 July 2025 19:58:49 +0000 (0:00:01.676) 0:00:06.518 *********
2025-07-12 20:01:05.676237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676248 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676268 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676325 | orchestrator |
2025-07-12 20:01:05.676336 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-07-12 20:01:05.676347 | orchestrator | Saturday 12 July 2025 19:58:50 +0000 (0:00:01.466) 0:00:07.985 *********
2025-07-12 20:01:05.676359 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676433 | orchestrator |
2025-07-12 20:01:05.676449 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-07-12 20:01:05.676461 | orchestrator | Saturday 12 July 2025 19:58:52 +0000 (0:00:01.587) 0:00:09.572 *********
2025-07-12 20:01:05.676472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676488 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.676545 | orchestrator |
2025-07-12 20:01:05.676556 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-07-12 20:01:05.676567 | orchestrator | Saturday 12 July
2025 19:58:53 +0000 (0:00:01.461) 0:00:11.034 *********
2025-07-12 20:01:05.676578 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:01:05.676590 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:01:05.676601 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:01:05.676612 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:05.676623 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:05.676634 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:05.676645 | orchestrator |
2025-07-12 20:01:05.676662 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-07-12 20:01:05.676674 | orchestrator | Saturday 12 July 2025 19:58:56 +0000 (0:00:02.853) 0:00:13.888 *********
2025-07-12 20:01:05.676685 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-07-12 20:01:05.676696 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-07-12 20:01:05.676707 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-07-12 20:01:05.676718 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-07-12 20:01:05.676728 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-07-12 20:01:05.676739 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-07-12 20:01:05.676750 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:01:05.676761 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:01:05.676778 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:01:05.676789 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:01:05.676800 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:01:05.676811 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:01:05.676822 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:01:05.676834 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:01:05.676845 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:01:05.676856 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:01:05.676872 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:01:05.676883 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:01:05.676894 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:01:05.676906 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:01:05.676917 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:01:05.676928 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:01:05.676960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:01:05.676973 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:01:05.676984 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:01:05.676995 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:01:05.677006 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:01:05.677017 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:01:05.677035 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:01:05.677046 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:01:05.677057 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:01:05.677068 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:01:05.677079 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:01:05.677090 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 20:01:05.677102 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:01:05.677113 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 20:01:05.677124 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:01:05.677135 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:01:05.677146 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 20:01:05.677157 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-07-12 20:01:05.677169 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-07-12 20:01:05.677180 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 20:01:05.677191 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 20:01:05.677202 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 20:01:05.677219 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-07-12 20:01:05.677230 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 20:01:05.677241 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 20:01:05.677252 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-07-12 20:01:05.677263 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-07-12 20:01:05.677275 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-07-12 20:01:05.677286 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 20:01:05.677297 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 20:01:05.677312 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 20:01:05.677324 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 20:01:05.677335 | orchestrator |
2025-07-12 20:01:05.677346 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:01:05.677357 | orchestrator | Saturday 12 July 2025 19:59:15 +0000 (0:00:18.712) 0:00:32.600 *********
2025-07-12 20:01:05.677374 | orchestrator |
2025-07-12 20:01:05.677386 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:01:05.677397 | orchestrator | Saturday 12 July 2025 19:59:15 +0000 (0:00:00.064) 0:00:32.665 *********
2025-07-12 20:01:05.677407 | orchestrator |
2025-07-12 20:01:05.677418 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:01:05.677429 | orchestrator | Saturday 12 July 2025 19:59:15 +0000 (0:00:00.088) 0:00:32.754 *********
2025-07-12 20:01:05.677440 | orchestrator |
2025-07-12 20:01:05.677452 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:01:05.677463 | orchestrator | Saturday 12 July 2025 19:59:15 +0000 (0:00:00.068) 0:00:32.822 *********
2025-07-12 20:01:05.677474 | orchestrator |
2025-07-12 20:01:05.677485 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:01:05.677496 | orchestrator | Saturday 12 July 2025 19:59:15 +0000 (0:00:00.070) 0:00:32.893 *********
2025-07-12 20:01:05.677507 | orchestrator |
2025-07-12 20:01:05.677518 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:01:05.677529 | orchestrator | Saturday 12 July 2025 19:59:15 +0000 (0:00:00.078) 0:00:32.971 *********
2025-07-12 20:01:05.677540 | orchestrator |
2025-07-12 20:01:05.677551 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-07-12 20:01:05.677562 | orchestrator | Saturday 12 July 2025 19:59:15 +0000 (0:00:00.066) 0:00:33.038 *********
2025-07-12 20:01:05.677573 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:05.677584 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:05.677595 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.677606 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:05.677617 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.677628 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.677639 | orchestrator |
2025-07-12 20:01:05.677650 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-07-12 20:01:05.677661 | orchestrator | Saturday 12 July 2025 19:59:17 +0000 (0:00:01.977) 0:00:35.015 *********
2025-07-12 20:01:05.677672 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:05.677684 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:05.677695 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:01:05.677706 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:05.677717 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:01:05.677728 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:01:05.677739 | orchestrator |
2025-07-12 20:01:05.677750 | orchestrator | PLAY [Apply role ovn-db]
*******************************************************
2025-07-12 20:01:05.677761 | orchestrator |
2025-07-12 20:01:05.677772 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-12 20:01:05.677783 | orchestrator | Saturday 12 July 2025 19:59:51 +0000 (0:00:33.463) 0:01:08.478 *********
2025-07-12 20:01:05.677794 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:01:05.677805 | orchestrator |
2025-07-12 20:01:05.677816 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-12 20:01:05.677827 | orchestrator | Saturday 12 July 2025 19:59:51 +0000 (0:00:00.492) 0:01:08.971 *********
2025-07-12 20:01:05.677838 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:01:05.677850 | orchestrator |
2025-07-12 20:01:05.677861 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-07-12 20:01:05.677872 | orchestrator | Saturday 12 July 2025 19:59:52 +0000 (0:00:00.695) 0:01:09.667 *********
2025-07-12 20:01:05.677883 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.677894 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.677905 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.677916 | orchestrator |
2025-07-12 20:01:05.677927 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-07-12 20:01:05.677961 | orchestrator | Saturday 12 July 2025 19:59:53 +0000 (0:00:00.880) 0:01:10.547 *********
2025-07-12 20:01:05.677990 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.678011 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.678088 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.678108 | orchestrator |
2025-07-12 20:01:05.678120 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-07-12 20:01:05.678131 | orchestrator | Saturday 12 July 2025 19:59:53 +0000 (0:00:00.464) 0:01:11.011 *********
2025-07-12 20:01:05.678141 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.678152 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.678163 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.678174 | orchestrator |
2025-07-12 20:01:05.678184 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-07-12 20:01:05.678195 | orchestrator | Saturday 12 July 2025 19:59:53 +0000 (0:00:00.334) 0:01:11.345 *********
2025-07-12 20:01:05.678206 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.678217 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.678227 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.678238 | orchestrator |
2025-07-12 20:01:05.678249 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-07-12 20:01:05.678260 | orchestrator | Saturday 12 July 2025 19:59:54 +0000 (0:00:00.413) 0:01:11.758 *********
2025-07-12 20:01:05.678271 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.678282 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.678293 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.678303 | orchestrator |
2025-07-12 20:01:05.678314 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-07-12 20:01:05.678325 | orchestrator | Saturday 12 July 2025 19:59:54 +0000 (0:00:00.303) 0:01:12.062 *********
2025-07-12 20:01:05.678342 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.678353 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.678364 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.678375 | orchestrator |
2025-07-12 20:01:05.678386 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-07-12 20:01:05.678396 | orchestrator | Saturday 12 July 2025 19:59:54 +0000 (0:00:00.270) 0:01:12.332 *********
2025-07-12 20:01:05.678407 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.678418 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.678429 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.678440 | orchestrator |
2025-07-12 20:01:05.678451 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-07-12 20:01:05.678462 | orchestrator | Saturday 12 July 2025 19:59:55 +0000 (0:00:00.266) 0:01:12.599 *********
2025-07-12 20:01:05.678473 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.678483 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.678494 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.678505 | orchestrator |
2025-07-12 20:01:05.678516 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-07-12 20:01:05.678527 | orchestrator | Saturday 12 July 2025 19:59:55 +0000 (0:00:00.395) 0:01:12.995 *********
2025-07-12 20:01:05.678538 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.678548 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.678559 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.678570 | orchestrator |
2025-07-12 20:01:05.678581 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-07-12 20:01:05.678592 | orchestrator | Saturday 12 July 2025 19:59:55 +0000 (0:00:00.261) 0:01:13.257 *********
2025-07-12 20:01:05.678603 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.678613 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.678624 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.678635 | orchestrator |
2025-07-12 20:01:05.678646 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-07-12 20:01:05.678657 | orchestrator | Saturday 12 July 2025 19:59:56 +0000 (0:00:00.304) 0:01:13.561 *********
2025-07-12 20:01:05.678668 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.678690 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.678701 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.678712 | orchestrator |
2025-07-12 20:01:05.678723 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-07-12 20:01:05.678734 | orchestrator | Saturday 12 July 2025 19:59:56 +0000 (0:00:00.269) 0:01:13.830 *********
2025-07-12 20:01:05.678745 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.678756 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.678767 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.678777 | orchestrator |
2025-07-12 20:01:05.678788 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-07-12 20:01:05.678799 | orchestrator | Saturday 12 July 2025 19:59:56 +0000 (0:00:00.375) 0:01:14.206 *********
2025-07-12 20:01:05.678810 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.678821 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.678832 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.678842 | orchestrator |
2025-07-12 20:01:05.678853 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-07-12 20:01:05.678864 | orchestrator | Saturday 12 July 2025 19:59:57 +0000 (0:00:00.269) 0:01:14.476 *********
2025-07-12 20:01:05.678875 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.678886 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.678897 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.678908 | orchestrator |
2025-07-12 20:01:05.678919 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-07-12 20:01:05.678930 | orchestrator | Saturday 12 July 2025 19:59:57 +0000 (0:00:00.283) 0:01:14.759 *********
2025-07-12 20:01:05.679002 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.679016 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.679026 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.679037 | orchestrator |
2025-07-12 20:01:05.679048 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-07-12 20:01:05.679059 | orchestrator | Saturday 12 July 2025 19:59:57 +0000 (0:00:00.265) 0:01:15.025 *********
2025-07-12 20:01:05.679070 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.679081 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.679092 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.679103 | orchestrator |
2025-07-12 20:01:05.679114 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-07-12 20:01:05.679125 | orchestrator | Saturday 12 July 2025 19:59:57 +0000 (0:00:00.381) 0:01:15.407 *********
2025-07-12 20:01:05.679243 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.679254 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.679274 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.679285 | orchestrator |
2025-07-12 20:01:05.679296 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-12 20:01:05.679307 | orchestrator | Saturday 12 July 2025 19:59:58 +0000 (0:00:00.295) 0:01:15.702 *********
2025-07-12 20:01:05.679318 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:01:05.679329 | orchestrator |
2025-07-12 20:01:05.679339 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-07-12 20:01:05.679350 | orchestrator | Saturday 12 July 2025 19:59:58 +0000 (0:00:00.502) 0:01:16.205 *********
2025-07-12 20:01:05.679361 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.679372 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.679382 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.679393 | orchestrator |
2025-07-12 20:01:05.679404 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-07-12 20:01:05.679415 | orchestrator | Saturday 12 July 2025 19:59:59 +0000 (0:00:00.642) 0:01:16.847 *********
2025-07-12 20:01:05.679425 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.679436 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.679447 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.679467 | orchestrator |
2025-07-12 20:01:05.679478 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-07-12 20:01:05.679494 | orchestrator | Saturday 12 July 2025 19:59:59 +0000 (0:00:00.405) 0:01:17.252 *********
2025-07-12 20:01:05.679504 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.679514 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.679523 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.679533 | orchestrator |
2025-07-12 20:01:05.679542 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-07-12 20:01:05.679552 | orchestrator | Saturday 12 July 2025 20:00:00 +0000 (0:00:00.371) 0:01:17.624 *********
2025-07-12 20:01:05.679561 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.679571 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.679580 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.679590 | orchestrator |
2025-07-12 20:01:05.679599 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-07-12 20:01:05.679609 | orchestrator | Saturday 12 July
2025 20:00:00 +0000 (0:00:00.311) 0:01:17.935 ********* 2025-07-12 20:01:05.679618 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:05.679628 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:05.679637 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:05.679647 | orchestrator | 2025-07-12 20:01:05.679656 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-07-12 20:01:05.679666 | orchestrator | Saturday 12 July 2025 20:00:00 +0000 (0:00:00.510) 0:01:18.446 ********* 2025-07-12 20:01:05.679676 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:05.679685 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:05.679694 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:05.679704 | orchestrator | 2025-07-12 20:01:05.679714 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-07-12 20:01:05.679723 | orchestrator | Saturday 12 July 2025 20:00:01 +0000 (0:00:00.322) 0:01:18.769 ********* 2025-07-12 20:01:05.679733 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:05.679742 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:05.679752 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:05.679762 | orchestrator | 2025-07-12 20:01:05.679771 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-07-12 20:01:05.679781 | orchestrator | Saturday 12 July 2025 20:00:01 +0000 (0:00:00.345) 0:01:19.114 ********* 2025-07-12 20:01:05.679790 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:05.679800 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:05.679810 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:05.679819 | orchestrator | 2025-07-12 20:01:05.679829 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-07-12 20:01:05.679838 | orchestrator | Saturday 12 
July 2025 20:00:01 +0000 (0:00:00.280) 0:01:19.394 ********* 2025-07-12 20:01:05.679848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.679860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.679870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.679892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.679904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.679914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.679929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.679958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.679970 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.679980 | orchestrator | 2025-07-12 20:01:05.679990 | orchestrator | TASK [ovn-db : Copying 
over config.json files for services] ******************** 2025-07-12 20:01:05.680000 | orchestrator | Saturday 12 July 2025 20:00:03 +0000 (0:00:01.491) 0:01:20.886 ********* 2025-07-12 20:01:05.680010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-07-12 20:01:05.680117 | orchestrator | 2025-07-12 20:01:05.680127 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-12 20:01:05.680137 | orchestrator | Saturday 12 July 2025 20:00:07 +0000 (0:00:04.106) 0:01:24.992 ********* 2025-07-12 20:01:05.680147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680192 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:01:05.680253 | orchestrator | 2025-07-12 20:01:05.680263 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 20:01:05.680272 | orchestrator | Saturday 12 July 2025 20:00:09 +0000 (0:00:02.011) 0:01:27.004 ********* 2025-07-12 20:01:05.680282 | orchestrator | 2025-07-12 20:01:05.680292 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 20:01:05.680302 | orchestrator | Saturday 12 July 2025 20:00:09 +0000 (0:00:00.062) 0:01:27.066 ********* 2025-07-12 20:01:05.680311 | orchestrator | 2025-07-12 20:01:05.680321 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 20:01:05.680331 | orchestrator | Saturday 12 July 2025 20:00:09 +0000 (0:00:00.077) 0:01:27.143 ********* 2025-07-12 20:01:05.680340 | orchestrator | 2025-07-12 20:01:05.680350 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-12 20:01:05.680360 | orchestrator | Saturday 12 July 2025 20:00:09 +0000 (0:00:00.061) 0:01:27.205 ********* 2025-07-12 20:01:05.680370 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:01:05.680380 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:01:05.680389 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:01:05.680399 | orchestrator | 2025-07-12 20:01:05.680409 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-12 20:01:05.680418 | orchestrator | Saturday 12 July 2025 20:00:17 +0000 (0:00:07.766) 0:01:34.972 ********* 2025-07-12 20:01:05.680428 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:01:05.680443 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:01:05.680453 | 
orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:05.680462 | orchestrator |
2025-07-12 20:01:05.680472 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-07-12 20:01:05.680482 | orchestrator | Saturday 12 July 2025 20:00:25 +0000 (0:00:07.553) 0:01:42.526 *********
2025-07-12 20:01:05.680491 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:05.680501 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:05.680511 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:05.680520 | orchestrator |
2025-07-12 20:01:05.680530 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-07-12 20:01:05.680540 | orchestrator | Saturday 12 July 2025 20:00:27 +0000 (0:00:02.571) 0:01:45.098 *********
2025-07-12 20:01:05.680549 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.680559 | orchestrator |
2025-07-12 20:01:05.680569 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-07-12 20:01:05.680578 | orchestrator | Saturday 12 July 2025 20:00:27 +0000 (0:00:00.112) 0:01:45.210 *********
2025-07-12 20:01:05.680588 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.680598 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.680607 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.680617 | orchestrator |
2025-07-12 20:01:05.680627 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-07-12 20:01:05.680637 | orchestrator | Saturday 12 July 2025 20:00:28 +0000 (0:00:00.724) 0:01:45.935 *********
2025-07-12 20:01:05.680646 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.680656 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.680666 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:05.680676 | orchestrator |
2025-07-12 20:01:05.680685 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-07-12 20:01:05.680695 | orchestrator | Saturday 12 July 2025 20:00:29 +0000 (0:00:00.750) 0:01:46.685 *********
2025-07-12 20:01:05.680705 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.680714 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.680724 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.680734 | orchestrator |
2025-07-12 20:01:05.680744 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-07-12 20:01:05.680753 | orchestrator | Saturday 12 July 2025 20:00:29 +0000 (0:00:00.731) 0:01:47.417 *********
2025-07-12 20:01:05.680763 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.680773 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.680782 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:05.680792 | orchestrator |
2025-07-12 20:01:05.680802 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-07-12 20:01:05.680812 | orchestrator | Saturday 12 July 2025 20:00:30 +0000 (0:00:00.621) 0:01:48.038 *********
2025-07-12 20:01:05.680821 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.680831 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.680845 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.680855 | orchestrator |
2025-07-12 20:01:05.680865 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-07-12 20:01:05.680875 | orchestrator | Saturday 12 July 2025 20:00:31 +0000 (0:00:00.684) 0:01:48.723 *********
2025-07-12 20:01:05.680884 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.680894 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.680904 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.680913 | orchestrator |
2025-07-12 20:01:05.680923 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-07-12 20:01:05.680933 | orchestrator | Saturday 12 July 2025 20:00:32 +0000 (0:00:00.974) 0:01:49.698 *********
2025-07-12 20:01:05.680961 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.680971 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.680981 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.680990 | orchestrator |
2025-07-12 20:01:05.681000 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-07-12 20:01:05.681015 | orchestrator | Saturday 12 July 2025 20:00:32 +0000 (0:00:00.266) 0:01:49.964 *********
2025-07-12 20:01:05.681029 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681040 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681050 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681060 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681071 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681081 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681091 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681101 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681117 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681127 | orchestrator |
2025-07-12 20:01:05.681137 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-07-12 20:01:05.681147 | orchestrator | Saturday 12 July 2025 20:00:33 +0000 (0:00:01.380) 0:01:51.345 *********
2025-07-12 20:01:05.681162 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681176 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681186 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681196 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681227 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681257 | orchestrator |
2025-07-12 20:01:05.681267 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-07-12 20:01:05.681276 | orchestrator | Saturday 12 July 2025 20:00:37 +0000 (0:00:03.807) 0:01:55.152 *********
2025-07-12 20:01:05.681292 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681307 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681318 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681345 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681385 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:01:05.681395 | orchestrator |
2025-07-12 20:01:05.681405 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-12 20:01:05.681414 | orchestrator | Saturday 12 July 2025 20:00:40 +0000 (0:00:02.980) 0:01:58.132 *********
2025-07-12 20:01:05.681424 | orchestrator |
2025-07-12 20:01:05.681434 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-12 20:01:05.681449 | orchestrator | Saturday 12 July 2025 20:00:40 +0000 (0:00:00.058) 0:01:58.190 *********
2025-07-12 20:01:05.681459 | orchestrator |
2025-07-12 20:01:05.681468 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-12 20:01:05.681478 | orchestrator | Saturday 12 July 2025 20:00:40 +0000 (0:00:00.058) 0:01:58.249 *********
2025-07-12 20:01:05.681487 | orchestrator |
2025-07-12 20:01:05.681497 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-07-12 20:01:05.681506 | orchestrator | Saturday 12 July 2025 20:00:40 +0000 (0:00:00.072) 0:01:58.321 *********
2025-07-12 20:01:05.681516 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:05.681526 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:05.681536 | orchestrator |
2025-07-12 20:01:05.681550 | orchestrator |
RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-07-12 20:01:05.681560 | orchestrator | Saturday 12 July 2025 20:00:47 +0000 (0:00:06.231) 0:02:04.552 *********
2025-07-12 20:01:05.681569 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:05.681579 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:05.681589 | orchestrator |
2025-07-12 20:01:05.681599 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-07-12 20:01:05.681608 | orchestrator | Saturday 12 July 2025 20:00:53 +0000 (0:00:06.380) 0:02:10.933 *********
2025-07-12 20:01:05.681618 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:05.681628 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:05.681637 | orchestrator |
2025-07-12 20:01:05.681647 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-07-12 20:01:05.681656 | orchestrator | Saturday 12 July 2025 20:00:59 +0000 (0:00:06.225) 0:02:17.158 *********
2025-07-12 20:01:05.681666 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:05.681676 | orchestrator |
2025-07-12 20:01:05.681685 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-07-12 20:01:05.681694 | orchestrator | Saturday 12 July 2025 20:00:59 +0000 (0:00:00.176) 0:02:17.335 *********
2025-07-12 20:01:05.681704 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.681714 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.681723 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.681733 | orchestrator |
2025-07-12 20:01:05.681746 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-07-12 20:01:05.681756 | orchestrator | Saturday 12 July 2025 20:01:00 +0000 (0:00:01.076) 0:02:18.412 *********
2025-07-12 20:01:05.681766 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.681776 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.681785 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:05.681795 | orchestrator |
2025-07-12 20:01:05.681804 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-07-12 20:01:05.681814 | orchestrator | Saturday 12 July 2025 20:01:01 +0000 (0:00:00.642) 0:02:19.054 *********
2025-07-12 20:01:05.681824 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.681833 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.681843 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.681853 | orchestrator |
2025-07-12 20:01:05.681862 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-07-12 20:01:05.681872 | orchestrator | Saturday 12 July 2025 20:01:02 +0000 (0:00:00.814) 0:02:19.869 *********
2025-07-12 20:01:05.681881 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:05.681891 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:05.681901 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:05.681910 | orchestrator |
2025-07-12 20:01:05.681920 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-07-12 20:01:05.681930 | orchestrator | Saturday 12 July 2025 20:01:03 +0000 (0:00:00.682) 0:02:20.551 *********
2025-07-12 20:01:05.681957 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.681975 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.681992 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.682009 | orchestrator |
2025-07-12 20:01:05.682051 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-07-12 20:01:05.682068 | orchestrator | Saturday 12 July 2025 20:01:04 +0000 (0:00:01.332) 0:02:21.883 *********
2025-07-12 20:01:05.682077 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:05.682087 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:05.682097 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:05.682106 | orchestrator |
2025-07-12 20:01:05.682116 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:01:05.682126 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0  failed=0  skipped=20  rescued=0  ignored=0
2025-07-12 20:01:05.682136 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-07-12 20:01:05.682146 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0  failed=0  skipped=22  rescued=0  ignored=0
2025-07-12 20:01:05.682155 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-07-12 20:01:05.682165 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-07-12 20:01:05.682175 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-07-12 20:01:05.682184 | orchestrator |
2025-07-12 20:01:05.682194 | orchestrator |
2025-07-12 20:01:05.682204 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:01:05.682213 | orchestrator | Saturday 12 July 2025 20:01:05 +0000 (0:00:00.945) 0:02:22.829 *********
2025-07-12 20:01:05.682223 | orchestrator | ===============================================================================
2025-07-12 20:01:05.682232 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 33.46s
2025-07-12 20:01:05.682242 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.71s
2025-07-12 20:01:05.682251 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.00s
2025-07-12 20:01:05.682261 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.93s
2025-07-12 20:01:05.682271 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.80s
2025-07-12 20:01:05.682280 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.11s
2025-07-12 20:01:05.682290 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.81s
2025-07-12 20:01:05.682305 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.98s
2025-07-12 20:01:05.682315 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.85s
2025-07-12 20:01:05.682325 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.01s
2025-07-12 20:01:05.682334 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.98s
2025-07-12 20:01:05.682344 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.73s
2025-07-12 20:01:05.682353 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.68s
2025-07-12 20:01:05.682363 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.59s
2025-07-12 20:01:05.682372 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.49s
2025-07-12 20:01:05.682382 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.47s
2025-07-12 20:01:05.682391 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.46s
2025-07-12 20:01:05.682401 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.38s
2025-07-12 20:01:05.682410 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.33s
2025-07-12 20:01:05.682420 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.23s
2025-07-12 20:01:08.725833
| orchestrator | 2025-07-12 20:01:08 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 20:01:08.726475 | orchestrator | 2025-07-12 20:01:08 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state STARTED
2025-07-12 20:01:08.726594 | orchestrator | 2025-07-12 20:01:08 | INFO  | Wait 1 second(s) until the next check
[... identical polling cycles omitted: both tasks remained in state STARTED, re-checked every ~3 seconds from 20:01:11 through 20:03:38 ...]
2025-07-12 20:03:41.144428 | orchestrator | 2025-07-12 20:03:41 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 20:03:41.147499 | orchestrator | 2025-07-12 20:03:41 | INFO  | Task a5757987-7596-48aa-bf56-fd4c39edf323 is in state SUCCESS
2025-07-12 20:03:41.149056 | orchestrator |
2025-07-12 20:03:41.149114 | orchestrator |
2025-07-12 20:03:41.149129 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:03:41.149141 | orchestrator |
2025-07-12 20:03:41.149153 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:03:41.149165 | orchestrator | Saturday 12 July 2025 19:57:25 +0000
(0:00:00.644) 0:00:00.644 *********
2025-07-12 20:03:41.149176 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.149188 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.149199 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.149225 | orchestrator |
2025-07-12 20:03:41.149247 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:03:41.149736 | orchestrator | Saturday 12 July 2025 19:57:26 +0000 (0:00:00.522) 0:00:01.167 *********
2025-07-12 20:03:41.149754 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-07-12 20:03:41.149766 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-07-12 20:03:41.149777 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-07-12 20:03:41.149788 | orchestrator |
2025-07-12 20:03:41.149800 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-07-12 20:03:41.149811 | orchestrator |
2025-07-12 20:03:41.149822 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-07-12 20:03:41.149833 | orchestrator | Saturday 12 July 2025 19:57:27 +0000 (0:00:00.894) 0:00:02.061 *********
2025-07-12 20:03:41.149845 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.149856 | orchestrator |
2025-07-12 20:03:41.149867 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-07-12 20:03:41.150296 | orchestrator | Saturday 12 July 2025 19:57:28 +0000 (0:00:01.028) 0:00:03.090 *********
2025-07-12 20:03:41.150318 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.150334 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.150353 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.150366 | orchestrator |
2025-07-12 20:03:41.150378 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-12 20:03:41.150388 | orchestrator | Saturday 12 July 2025 19:57:29 +0000 (0:00:01.050) 0:00:04.140 *********
2025-07-12 20:03:41.150400 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.150410 | orchestrator |
2025-07-12 20:03:41.150421 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-07-12 20:03:41.150432 | orchestrator | Saturday 12 July 2025 19:57:30 +0000 (0:00:01.344) 0:00:05.485 *********
2025-07-12 20:03:41.150442 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.150453 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.150464 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.150474 | orchestrator |
2025-07-12 20:03:41.150485 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-07-12 20:03:41.150520 | orchestrator | Saturday 12 July 2025 19:57:31 +0000 (0:00:00.898) 0:00:06.383 *********
2025-07-12 20:03:41.150531 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:03:41.150542 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:03:41.150553 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 20:03:41.150565 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:03:41.150576 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:03:41.150587 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 20:03:41.150598 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:03:41.150608 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:03:41.150620 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 20:03:41.150631 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 20:03:41.150642 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 20:03:41.150652 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 20:03:41.150663 | orchestrator |
2025-07-12 20:03:41.150674 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-07-12 20:03:41.150685 | orchestrator | Saturday 12 July 2025 19:57:35 +0000 (0:00:04.039) 0:00:10.422 *********
2025-07-12 20:03:41.150696 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-07-12 20:03:41.150720 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-07-12 20:03:41.150731 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-07-12 20:03:41.150742 | orchestrator |
2025-07-12 20:03:41.150752 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-07-12 20:03:41.150763 | orchestrator | Saturday 12 July 2025 19:57:36 +0000 (0:00:00.739) 0:00:11.162 *********
2025-07-12 20:03:41.150774 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-07-12 20:03:41.150785 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-07-12 20:03:41.150795 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-07-12 20:03:41.150806 | orchestrator |
2025-07-12 20:03:41.150817 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-07-12 20:03:41.150828 | orchestrator | Saturday 12 July 2025 19:57:37 +0000 (0:00:01.779) 0:00:12.942 *********
2025-07-12 20:03:41.150838 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-07-12 20:03:41.150849 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.150899 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-07-12 20:03:41.150912 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.150926 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-07-12 20:03:41.150940 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.150978 | orchestrator |
2025-07-12 20:03:41.150998 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-07-12 20:03:41.151011 | orchestrator | Saturday 12 July 2025 19:57:38 +0000 (0:00:00.562) 0:00:13.505 *********
2025-07-12 20:03:41.151711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 20:03:41.151746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries':
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.151759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.151771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.151789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.151832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.151846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:03:41.151859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:03:41.151877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:03:41.151889 | orchestrator | 2025-07-12 20:03:41.151900 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-07-12 20:03:41.151912 | orchestrator | Saturday 12 July 2025 19:57:40 +0000 (0:00:01.854) 0:00:15.359 ********* 2025-07-12 20:03:41.151923 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.151934 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.151944 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.151979 | orchestrator | 2025-07-12 20:03:41.151990 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-07-12 20:03:41.152849 | orchestrator | Saturday 12 July 2025 19:57:42 +0000 (0:00:01.715) 0:00:17.075 ********* 2025-07-12 20:03:41.152861 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-07-12 20:03:41.152873 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-07-12 20:03:41.152884 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-07-12 20:03:41.152895 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-07-12 20:03:41.152906 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-07-12 20:03:41.152917 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-07-12 20:03:41.152928 | orchestrator | 2025-07-12 20:03:41.152939 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-07-12 20:03:41.152972 | orchestrator | Saturday 12 July 2025 19:57:44 +0000 (0:00:02.380) 0:00:19.455 ********* 
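The directory tasks above build per-service config trees under `/etc/kolla/`, with extra subdirectories for some services (`users` and `rules` for proxysql, a checks subdir for keepalived). A minimal sketch of that layout, with the root path parameterised so it can run against a scratch directory — the helper name and the exact subdir map are our assumptions generalised from the task output, not kolla-ansible code:

```python
from pathlib import Path

# Per-service config subdirectories inferred from the task output above
# (assumption: generalised from this log, not taken from the role source).
SERVICE_SUBDIRS = {
    "haproxy": ["services.d"],
    "proxysql": ["users", "rules"],
    "keepalived": ["checks"],
}

def ensure_config_dirs(root: Path) -> list[Path]:
    """Create /etc/kolla-style config trees under an arbitrary root."""
    created = []
    for service, subdirs in SERVICE_SUBDIRS.items():
        for sub in subdirs:
            d = root / service / sub
            d.mkdir(parents=True, exist_ok=True)  # idempotent, like the task
            created.append(d)
    return created
```

Running it against a temp directory yields the same `users`/`rules` split the proxysql task reports as changed items.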
2025-07-12 20:03:41.152984 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.152995 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.153006 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.153017 | orchestrator | 2025-07-12 20:03:41.153027 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-07-12 20:03:41.153039 | orchestrator | Saturday 12 July 2025 19:57:46 +0000 (0:00:01.656) 0:00:21.112 ********* 2025-07-12 20:03:41.153049 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:41.153061 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:41.153072 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:41.153082 | orchestrator | 2025-07-12 20:03:41.153093 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-07-12 20:03:41.153104 | orchestrator | Saturday 12 July 2025 19:57:48 +0000 (0:00:02.329) 0:00:23.441 ********* 2025-07-12 20:03:41.153121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.153284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.153317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.153331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:03:41.153343 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.153357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.153376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.153393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.153410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:03:41.153429 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.153517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.153533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.153545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.153640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:03:41.153653 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.153664 | orchestrator | 2025-07-12 20:03:41.153675 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-07-12 20:03:41.153687 | orchestrator | Saturday 12 July 2025 19:57:49 +0000 (0:00:01.276) 0:00:24.718 ********* 2025-07-12 20:03:41.153698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.153716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.153813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.153831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.153843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.153854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:03:41.153866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.153877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.153902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:03:41.154081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.154103 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.154115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a', '__omit_place_holder__9e907eaa05bda021a1e1bd689664b1621b033e9a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:03:41.154127 | orchestrator | 2025-07-12 20:03:41.154138 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-07-12 20:03:41.154149 | orchestrator | Saturday 12 July 2025 19:57:52 +0000 (0:00:03.124) 0:00:27.842 ********* 2025-07-12 20:03:41.154161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.154172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.154199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.154667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.154684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.154695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.154705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:03:41.154716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:03:41.154726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:03:41.154747 | orchestrator | 2025-07-12 20:03:41.154757 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-07-12 20:03:41.154767 | orchestrator | Saturday 12 July 2025 19:57:57 +0000 (0:00:04.812) 0:00:32.654 ********* 2025-07-12 20:03:41.154777 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 20:03:41.154793 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 20:03:41.154803 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 20:03:41.154813 | orchestrator | 2025-07-12 
20:03:41.155208 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-07-12 20:03:41.155229 | orchestrator | Saturday 12 July 2025 19:58:00 +0000 (0:00:02.649) 0:00:35.304 ********* 2025-07-12 20:03:41.155239 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 20:03:41.155250 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 20:03:41.155260 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 20:03:41.155271 | orchestrator | 2025-07-12 20:03:41.155337 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-07-12 20:03:41.155350 | orchestrator | Saturday 12 July 2025 19:58:07 +0000 (0:00:07.267) 0:00:42.572 ********* 2025-07-12 20:03:41.155360 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.155370 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.155381 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.155391 | orchestrator | 2025-07-12 20:03:41.155401 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-07-12 20:03:41.155411 | orchestrator | Saturday 12 July 2025 19:58:09 +0000 (0:00:02.134) 0:00:44.707 ********* 2025-07-12 20:03:41.155422 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 20:03:41.155433 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 20:03:41.155443 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 20:03:41.155453 | orchestrator | 2025-07-12 
20:03:41.155463 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-07-12 20:03:41.155473 | orchestrator | Saturday 12 July 2025 19:58:13 +0000 (0:00:04.114) 0:00:48.821 ********* 2025-07-12 20:03:41.155484 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 20:03:41.155494 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 20:03:41.155504 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 20:03:41.155514 | orchestrator | 2025-07-12 20:03:41.155525 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-07-12 20:03:41.155535 | orchestrator | Saturday 12 July 2025 19:58:16 +0000 (0:00:02.886) 0:00:51.708 ********* 2025-07-12 20:03:41.155545 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-07-12 20:03:41.155555 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-07-12 20:03:41.155576 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-07-12 20:03:41.155586 | orchestrator | 2025-07-12 20:03:41.155596 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-07-12 20:03:41.155606 | orchestrator | Saturday 12 July 2025 19:58:18 +0000 (0:00:02.125) 0:00:53.834 ********* 2025-07-12 20:03:41.155616 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-07-12 20:03:41.155627 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-07-12 20:03:41.156645 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-07-12 20:03:41.158257 | orchestrator | 2025-07-12 20:03:41.158283 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2025-07-12 20:03:41.158298 | orchestrator | Saturday 12 July 2025 19:58:20 +0000 (0:00:01.915) 0:00:55.749 ********* 2025-07-12 20:03:41.158311 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.158324 | orchestrator | 2025-07-12 20:03:41.158337 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-07-12 20:03:41.158349 | orchestrator | Saturday 12 July 2025 19:58:22 +0000 (0:00:01.423) 0:00:57.173 ********* 2025-07-12 20:03:41.158363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.158398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.158431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.158444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.158456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.158490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:03:41.158503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:03:41.158514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:03:41.158532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:03:41.158543 | orchestrator | 2025-07-12 20:03:41.158556 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-07-12 20:03:41.158568 | orchestrator | Saturday 12 July 2025 19:58:26 +0000 (0:00:04.435) 0:01:01.608 ********* 2025-07-12 20:03:41.158588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.158601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.158620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.158632 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.158643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.158664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.158681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.158693 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.158704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.158726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.158746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.158757 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.158768 | orchestrator | 2025-07-12 20:03:41.158780 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-07-12 20:03:41.158791 | orchestrator | Saturday 12 July 2025 19:58:27 +0000 (0:00:00.948) 0:01:02.556 ********* 2025-07-12 20:03:41.158802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.158814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-07-12 20:03:41.158826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.158837 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.158853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.158871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.158890 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.158902 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.158913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.158925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.158936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.158973 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.158985 | orchestrator | 2025-07-12 20:03:41.158997 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-12 20:03:41.159008 | orchestrator | Saturday 12 July 2025 19:58:29 +0000 (0:00:02.096) 0:01:04.653 ********* 2025-07-12 20:03:41.159020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159101 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.159112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 
20:03:41.159189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159207 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.159218 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.159229 | orchestrator | 2025-07-12 20:03:41.159247 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-12 20:03:41.159258 | orchestrator | Saturday 12 July 2025 19:58:31 +0000 (0:00:01.556) 0:01:06.210 ********* 2025-07-12 20:03:41.159274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159309 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.159320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159381 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.159400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159435 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.159446 | orchestrator | 2025-07-12 20:03:41.159457 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-12 20:03:41.159468 | orchestrator | Saturday 12 July 2025 19:58:31 +0000 (0:00:00.704) 0:01:06.914 ********* 2025-07-12 20:03:41.159479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159531 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.159549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159583 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.159594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 
'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159636 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.159647 | orchestrator | 2025-07-12 20:03:41.159658 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-07-12 20:03:41.159669 | orchestrator | Saturday 12 July 2025 19:58:33 +0000 (0:00:01.267) 0:01:08.181 ********* 2025-07-12 20:03:41.159685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2025-07-12 20:03:41.159703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159727 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.159738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159750 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159779 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.159795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159836 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.159847 | orchestrator | 2025-07-12 20:03:41.159858 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-07-12 20:03:41.159869 | orchestrator | Saturday 12 July 2025 19:58:33 +0000 (0:00:00.640) 0:01:08.822 ********* 2025-07-12 20:03:41.159880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.159892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.159903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.159921 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.159932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.160012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.160044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.160056 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.160067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.160079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.160090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.160111 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.160122 | orchestrator | 2025-07-12 20:03:41.160133 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-07-12 20:03:41.160144 | orchestrator | Saturday 12 July 2025 19:58:34 +0000 (0:00:00.612) 0:01:09.435 ********* 2025-07-12 20:03:41.160155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.160172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.160184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.160195 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.160213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.160225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.160237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.160255 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.160266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:03:41.160278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:03:41.160294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:03:41.160305 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.160316 | orchestrator | 2025-07-12 20:03:41.160327 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-07-12 20:03:41.160338 | orchestrator | Saturday 12 July 2025 19:58:35 +0000 (0:00:01.230) 0:01:10.666 ********* 2025-07-12 20:03:41.160349 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-12 20:03:41.160361 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-12 20:03:41.160378 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-12 20:03:41.160389 | orchestrator | 2025-07-12 20:03:41.160401 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-07-12 20:03:41.160411 | orchestrator | Saturday 12 July 2025 19:58:37 +0000 (0:00:01.497) 0:01:12.164 ********* 2025-07-12 20:03:41.160422 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-12 20:03:41.160433 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-12 20:03:41.160444 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-12 20:03:41.160455 | orchestrator | 2025-07-12 20:03:41.160466 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-07-12 20:03:41.160476 | orchestrator | Saturday 12 July 2025 19:58:38 +0000 (0:00:01.746) 0:01:13.911 ********* 2025-07-12 20:03:41.160487 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 20:03:41.160498 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 20:03:41.160509 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 20:03:41.160527 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.160538 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 20:03:41.160549 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.160560 | orchestrator 
| skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 20:03:41.160571 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 20:03:41.160581 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.160592 | orchestrator | 2025-07-12 20:03:41.160603 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-07-12 20:03:41.160614 | orchestrator | Saturday 12 July 2025 19:58:40 +0000 (0:00:01.654) 0:01:15.566 ********* 2025-07-12 20:03:41.160625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.160636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:03:41.160653 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 20:03:41.160671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:03:41.160683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:03:41.160701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:03:41.160713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:03:41.160724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:03:41.160736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:03:41.160747 | orchestrator |
2025-07-12 20:03:41.160758 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-07-12 20:03:41.160769 | orchestrator | Saturday 12 July 2025 19:58:43 +0000 (0:00:03.089) 0:01:18.655 *********
2025-07-12 20:03:41.160780 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.160791 | orchestrator |
2025-07-12 20:03:41.160802 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-07-12 20:03:41.160813 | orchestrator | Saturday 12 July 2025 19:58:44 +0000 (0:00:00.944) 0:01:19.599 *********
2025-07-12 20:03:41.160834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 20:03:41.160854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 20:03:41.160873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.160885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.160897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 20:03:41.160908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 20:03:41.160925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.160942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.160982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 20:03:41.160994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 20:03:41.161005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161028 | orchestrator |
2025-07-12 20:03:41.161039 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2025-07-12 20:03:41.161050 | orchestrator | Saturday 12 July 2025 19:58:48 +0000 (0:00:03.896) 0:01:23.496 *********
2025-07-12 20:03:41.161066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 20:03:41.161085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 20:03:41.161103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161126 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.161137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 20:03:41.161149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 20:03:41.161165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161194 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.161212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-07-12 20:03:41.161223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-07-12 20:03:41.161235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161257 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.161275 | orchestrator |
2025-07-12 20:03:41.161286 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2025-07-12 20:03:41.161297 | orchestrator | Saturday 12 July 2025 19:58:49 +0000 (0:00:00.678) 0:01:24.174 *********
2025-07-12 20:03:41.161309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-07-12 20:03:41.161321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-07-12 20:03:41.161332 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.161344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-07-12 20:03:41.161365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-07-12 20:03:41.161377 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.161388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2025-07-12 20:03:41.161399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2025-07-12 20:03:41.161410 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.161421 | orchestrator |
2025-07-12 20:03:41.161450 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2025-07-12 20:03:41.161462 | orchestrator | Saturday 12 July 2025 19:58:50 +0000 (0:00:01.178) 0:01:25.353 *********
2025-07-12 20:03:41.161473 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.161484 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.161495 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.161506 | orchestrator |
2025-07-12 20:03:41.161516 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2025-07-12 20:03:41.161527 | orchestrator | Saturday 12 July 2025 19:58:51 +0000 (0:00:01.484) 0:01:26.837 *********
2025-07-12 20:03:41.161538 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.161549 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.161560 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.161571 | orchestrator |
2025-07-12 20:03:41.161582 | orchestrator | TASK [include_role : barbican] *************************************************
2025-07-12 20:03:41.161593 | orchestrator | Saturday 12 July 2025 19:58:53 +0000 (0:00:02.165) 0:01:29.002 *********
2025-07-12 20:03:41.161603 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.161614 | orchestrator |
2025-07-12 20:03:41.161625 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2025-07-12 20:03:41.161636 | orchestrator | Saturday 12 July 2025 19:58:54 +0000 (0:00:00.722) 0:01:29.724 *********
2025-07-12 20:03:41.161648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.161660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.161714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.161749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161778 | orchestrator |
2025-07-12 20:03:41.161794 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-07-12 20:03:41.161806 | orchestrator | Saturday 12 July 2025 19:58:59 +0000 (0:00:05.236) 0:01:34.961 *********
2025-07-12 20:03:41.161823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.161835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161858 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.161870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.161887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161919 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.161936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.161964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.161989 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.162007 | orchestrator |
2025-07-12 20:03:41.162073 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-07-12 20:03:41.162087 | orchestrator | Saturday 12 July 2025 19:59:00 +0000 (0:00:00.657) 0:01:35.618 *********
2025-07-12 20:03:41.162099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 20:03:41.162112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 20:03:41.162123 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.162141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 20:03:41.162153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 20:03:41.162164 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.162175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 20:03:41.162191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-07-12 20:03:41.162203 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.162214 | orchestrator |
2025-07-12 20:03:41.162225 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-07-12 20:03:41.162236 | orchestrator | Saturday 12 July 2025 19:59:01 +0000 (0:00:00.944) 0:01:36.563 *********
2025-07-12 20:03:41.162247 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.162258 | orchestrator | 
changed: [testbed-node-1] 2025-07-12 20:03:41.162268 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.162279 | orchestrator | 2025-07-12 20:03:41.162290 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-07-12 20:03:41.162301 | orchestrator | Saturday 12 July 2025 19:59:03 +0000 (0:00:01.637) 0:01:38.200 ********* 2025-07-12 20:03:41.162312 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.162322 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.162333 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.162344 | orchestrator | 2025-07-12 20:03:41.162367 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-07-12 20:03:41.162379 | orchestrator | Saturday 12 July 2025 19:59:05 +0000 (0:00:02.012) 0:01:40.212 ********* 2025-07-12 20:03:41.162390 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.162401 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.162412 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.162422 | orchestrator | 2025-07-12 20:03:41.162433 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-07-12 20:03:41.162444 | orchestrator | Saturday 12 July 2025 19:59:05 +0000 (0:00:00.295) 0:01:40.508 ********* 2025-07-12 20:03:41.162455 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.162469 | orchestrator | 2025-07-12 20:03:41.162487 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-07-12 20:03:41.162503 | orchestrator | Saturday 12 July 2025 19:59:06 +0000 (0:00:00.655) 0:01:41.163 ********* 2025-07-12 20:03:41.162516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 20:03:41.162537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 20:03:41.162549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 20:03:41.162560 | orchestrator | 2025-07-12 20:03:41.162571 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-07-12 20:03:41.162582 | orchestrator | Saturday 12 July 2025 19:59:08 +0000 (0:00:02.645) 0:01:43.809 ********* 2025-07-12 20:03:41.162600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 20:03:41.162612 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.162624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 20:03:41.162642 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.162672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 20:03:41.162685 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.162696 | orchestrator | 2025-07-12 20:03:41.162707 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-07-12 20:03:41.162718 | orchestrator | Saturday 12 July 2025 19:59:10 +0000 (0:00:01.365) 0:01:45.175 ********* 2025-07-12 20:03:41.162730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:03:41.162742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:03:41.162754 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.162765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:03:41.162782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:03:41.162794 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.162811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:03:41.162823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:03:41.162840 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.162852 | orchestrator | 2025-07-12 20:03:41.162862 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-07-12 20:03:41.162873 | orchestrator | Saturday 12 July 2025 19:59:11 +0000 (0:00:01.678) 0:01:46.854 ********* 2025-07-12 20:03:41.162884 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.162895 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.162906 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.162917 | orchestrator | 2025-07-12 20:03:41.162928 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-07-12 20:03:41.162939 | orchestrator | Saturday 12 July 2025 19:59:12 +0000 (0:00:00.913) 0:01:47.767 ********* 2025-07-12 20:03:41.162976 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.162996 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.163014 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.163030 | orchestrator | 2025-07-12 20:03:41.163046 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-07-12 20:03:41.163062 | orchestrator | Saturday 12 July 2025 19:59:13 +0000 (0:00:01.008) 
0:01:48.775 ********* 2025-07-12 20:03:41.163080 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.163100 | orchestrator | 2025-07-12 20:03:41.163118 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-07-12 20:03:41.163135 | orchestrator | Saturday 12 July 2025 19:59:14 +0000 (0:00:00.981) 0:01:49.757 ********* 2025-07-12 20:03:41.163147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.163159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163177 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.163218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.163283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163331 | orchestrator | 2025-07-12 
20:03:41.163342 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-07-12 20:03:41.163353 | orchestrator | Saturday 12 July 2025 19:59:18 +0000 (0:00:03.558) 0:01:53.316 ********* 2025-07-12 20:03:41.163369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:03:41.163387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163430 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.163441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:03:41.163452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163497 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163511 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.163531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:03:41.163543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.163590 | orchestrator | skipping: [testbed-node-2] 2025-07-12 
20:03:41.163601 | orchestrator | 2025-07-12 20:03:41.163612 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-07-12 20:03:41.163623 | orchestrator | Saturday 12 July 2025 19:59:19 +0000 (0:00:01.175) 0:01:54.492 ********* 2025-07-12 20:03:41.163634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 20:03:41.163652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 20:03:41.163664 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.163675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 20:03:41.163687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 20:03:41.163698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 20:03:41.163709 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.163720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-12 20:03:41.163731 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 20:03:41.163742 | orchestrator | 2025-07-12 20:03:41.163753 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-07-12 20:03:41.163764 | orchestrator | Saturday 12 July 2025 19:59:20 +0000 (0:00:00.970) 0:01:55.462 ********* 2025-07-12 20:03:41.163775 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.163786 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.163797 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.163808 | orchestrator | 2025-07-12 20:03:41.163819 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-07-12 20:03:41.163830 | orchestrator | Saturday 12 July 2025 19:59:21 +0000 (0:00:01.245) 0:01:56.708 ********* 2025-07-12 20:03:41.163840 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.163851 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.163862 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.163873 | orchestrator | 2025-07-12 20:03:41.163884 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-07-12 20:03:41.163895 | orchestrator | Saturday 12 July 2025 19:59:24 +0000 (0:00:02.536) 0:01:59.245 ********* 2025-07-12 20:03:41.163906 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.163917 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.163928 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.163945 | orchestrator | 2025-07-12 20:03:41.164038 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-07-12 20:03:41.164052 | orchestrator | Saturday 12 July 2025 19:59:24 +0000 (0:00:00.641) 0:01:59.887 ********* 2025-07-12 20:03:41.164063 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.164074 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.164085 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 20:03:41.164095 | orchestrator | 2025-07-12 20:03:41.164106 | orchestrator | TASK [include_role : designate] ************************************************ 2025-07-12 20:03:41.164117 | orchestrator | Saturday 12 July 2025 19:59:25 +0000 (0:00:00.313) 0:02:00.200 ********* 2025-07-12 20:03:41.164128 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.164138 | orchestrator | 2025-07-12 20:03:41.164149 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-07-12 20:03:41.164160 | orchestrator | Saturday 12 July 2025 19:59:25 +0000 (0:00:00.774) 0:02:00.975 ********* 2025-07-12 20:03:41.164179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:03:41.164200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:03:41.164212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:03:41.164301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:03:41.164313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:03:41.164473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:03:41.164484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164551 | orchestrator | 2025-07-12 20:03:41.164561 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-07-12 20:03:41.164571 | orchestrator | Saturday 12 July 2025 19:59:30 +0000 (0:00:04.965) 0:02:05.940 ********* 2025-07-12 20:03:41.164588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:03:41.164599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:03:41.164616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:03:41.164678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164694 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.164705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:03:41.164715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:03:41.164766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:03:41.164797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164817 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.164828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.164887 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.164897 | orchestrator | 2025-07-12 20:03:41.164907 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-07-12 20:03:41.164917 | orchestrator | Saturday 12 July 2025 19:59:31 +0000 (0:00:00.834) 0:02:06.775 ********* 2025-07-12 20:03:41.164927 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:03:41.164937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:03:41.164966 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.164981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:03:41.164991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:03:41.165001 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.165010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:03:41.165020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:03:41.165030 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.165040 | orchestrator | 2025-07-12 20:03:41.165049 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-07-12 20:03:41.165059 | orchestrator | Saturday 12 July 2025 19:59:32 +0000 (0:00:00.989) 0:02:07.765 ********* 2025-07-12 20:03:41.165069 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.165078 | orchestrator | changed: 
[testbed-node-1] 2025-07-12 20:03:41.165088 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.165098 | orchestrator | 2025-07-12 20:03:41.165107 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-07-12 20:03:41.165117 | orchestrator | Saturday 12 July 2025 19:59:34 +0000 (0:00:01.508) 0:02:09.273 ********* 2025-07-12 20:03:41.165127 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.165137 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.165146 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.165156 | orchestrator | 2025-07-12 20:03:41.165166 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-07-12 20:03:41.165176 | orchestrator | Saturday 12 July 2025 19:59:36 +0000 (0:00:01.791) 0:02:11.065 ********* 2025-07-12 20:03:41.165185 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.165195 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.165205 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.165214 | orchestrator | 2025-07-12 20:03:41.165235 | orchestrator | TASK [include_role : glance] *************************************************** 2025-07-12 20:03:41.165245 | orchestrator | Saturday 12 July 2025 19:59:36 +0000 (0:00:00.266) 0:02:11.331 ********* 2025-07-12 20:03:41.165254 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.165264 | orchestrator | 2025-07-12 20:03:41.165274 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-07-12 20:03:41.165284 | orchestrator | Saturday 12 July 2025 19:59:36 +0000 (0:00:00.676) 0:02:12.007 ********* 2025-07-12 20:03:41.165303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:03:41.165318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:03:41.165538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:03:41.165568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:03:41.165593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:03:41.165614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:03:41.165626 | orchestrator | 2025-07-12 20:03:41.165636 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-07-12 20:03:41.165646 | orchestrator | Saturday 12 July 2025 19:59:40 +0000 (0:00:03.791) 0:02:15.799 ********* 2025-07-12 20:03:41.165666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:03:41.165685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:03:41.165696 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.165712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:03:41.165738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:03:41.165749 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.165760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:03:41.165786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:03:41.165798 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.165808 | orchestrator | 2025-07-12 20:03:41.165819 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-07-12 20:03:41.165828 | orchestrator | Saturday 12 July 2025 19:59:43 +0000 (0:00:02.668) 0:02:18.467 ********* 2025-07-12 20:03:41.165839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:03:41.165849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:03:41.165860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:03:41.165881 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.165891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:03:41.165901 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.165915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:03:41.165932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:03:41.165942 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.165975 | orchestrator | 2025-07-12 20:03:41.165985 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-07-12 20:03:41.165995 | orchestrator | Saturday 12 July 2025 19:59:46 +0000 (0:00:03.062) 0:02:21.529 ********* 2025-07-12 20:03:41.166005 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.166041 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.166052 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.166061 | orchestrator | 2025-07-12 20:03:41.166071 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-07-12 20:03:41.166081 | orchestrator | Saturday 12 July 2025 19:59:47 +0000 (0:00:01.484) 0:02:23.014 ********* 2025-07-12 20:03:41.166091 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.166102 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.166112 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.166122 | orchestrator | 2025-07-12 20:03:41.166132 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-07-12 20:03:41.166141 | orchestrator | Saturday 12 July 2025 19:59:50 +0000 (0:00:02.168) 0:02:25.182 ********* 2025-07-12 20:03:41.166151 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.166161 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.166170 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.166180 | orchestrator | 2025-07-12 20:03:41.166190 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-07-12 20:03:41.166199 | orchestrator | Saturday 12 July 2025 19:59:50 +0000 (0:00:00.309) 0:02:25.491 ********* 2025-07-12 20:03:41.166209 | orchestrator | included: 
grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.166219 | orchestrator | 2025-07-12 20:03:41.166228 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-07-12 20:03:41.166238 | orchestrator | Saturday 12 July 2025 19:59:51 +0000 (0:00:00.868) 0:02:26.360 ********* 2025-07-12 20:03:41.166256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:03:41.166267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:03:41.166283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:03:41.166293 | orchestrator | 2025-07-12 20:03:41.166303 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-07-12 20:03:41.166313 | orchestrator | Saturday 12 July 2025 19:59:54 +0000 (0:00:03.396) 0:02:29.756 ********* 2025-07-12 20:03:41.166329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:03:41.166347 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.166357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:03:41.166367 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.166377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:03:41.166400 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.166410 | orchestrator | 2025-07-12 20:03:41.166420 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-07-12 20:03:41.166429 | orchestrator | Saturday 12 July 2025 19:59:55 +0000 (0:00:00.332) 0:02:30.089 ********* 2025-07-12 20:03:41.166439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 20:03:41.166449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 20:03:41.166459 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.166469 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 20:03:41.166479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 20:03:41.166489 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.166499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-07-12 20:03:41.166509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-07-12 20:03:41.166518 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.166528 | orchestrator | 2025-07-12 20:03:41.166538 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-07-12 20:03:41.166548 | orchestrator | Saturday 12 July 2025 19:59:55 +0000 (0:00:00.587) 0:02:30.677 ********* 2025-07-12 20:03:41.166558 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.166567 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.166577 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.166586 | orchestrator | 2025-07-12 20:03:41.166596 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-07-12 20:03:41.166606 | orchestrator | Saturday 12 July 2025 19:59:57 +0000 (0:00:01.545) 0:02:32.223 ********* 2025-07-12 20:03:41.166616 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.166625 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.166635 | orchestrator | changed: 
[testbed-node-2] 2025-07-12 20:03:41.166645 | orchestrator | 2025-07-12 20:03:41.166655 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-07-12 20:03:41.166665 | orchestrator | Saturday 12 July 2025 19:59:59 +0000 (0:00:01.858) 0:02:34.081 ********* 2025-07-12 20:03:41.166674 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.166684 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.166705 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.166715 | orchestrator | 2025-07-12 20:03:41.166744 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-07-12 20:03:41.166754 | orchestrator | Saturday 12 July 2025 19:59:59 +0000 (0:00:00.273) 0:02:34.355 ********* 2025-07-12 20:03:41.166764 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.166780 | orchestrator | 2025-07-12 20:03:41.166790 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-07-12 20:03:41.166800 | orchestrator | Saturday 12 July 2025 20:00:00 +0000 (0:00:00.892) 0:02:35.248 ********* 2025-07-12 20:03:41.166812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:03:41.166836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 
'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:03:41.166854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 
'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:03:41.166865 | orchestrator | 2025-07-12 20:03:41.166875 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-07-12 20:03:41.166885 | 
orchestrator | Saturday 12 July 2025 20:00:03 +0000 (0:00:03.560) 0:02:38.808 ********* 2025-07-12 20:03:41.166907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:03:41.166924 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.166935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:03:41.166982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:03:41.167010 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.167027 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.167044 | orchestrator | 2025-07-12 20:03:41.167054 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-07-12 20:03:41.167064 | orchestrator | Saturday 12 July 2025 20:00:04 +0000 (0:00:00.846) 0:02:39.655 ********* 2025-07-12 20:03:41.167075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 20:03:41.167086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 20:03:41.167096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 20:03:41.167107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 20:03:41.167117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 20:03:41.167132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 20:03:41.167142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 20:03:41.167158 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.167168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 20:03:41.167184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 20:03:41.167195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 20:03:41.167204 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.167214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 20:03:41.167224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 20:03:41.167235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-07-12 20:03:41.167245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-07-12 20:03:41.167255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-07-12 20:03:41.167265 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.167274 | orchestrator | 2025-07-12 20:03:41.167284 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-07-12 20:03:41.167294 | orchestrator | Saturday 12 July 2025 20:00:05 +0000 (0:00:00.947) 0:02:40.602 ********* 2025-07-12 20:03:41.167304 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.167314 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.167323 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.167333 | orchestrator | 2025-07-12 20:03:41.167343 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-07-12 20:03:41.167353 | orchestrator | Saturday 12 July 2025 20:00:07 +0000 (0:00:01.515) 0:02:42.118 ********* 2025-07-12 20:03:41.167362 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.167372 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.167382 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.167391 | orchestrator | 2025-07-12 20:03:41.167401 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-07-12 20:03:41.167411 | orchestrator | Saturday 12 July 2025 20:00:08 +0000 (0:00:01.874) 0:02:43.992 ********* 2025-07-12 20:03:41.167420 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.167430 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.167449 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.167459 | orchestrator | 2025-07-12 20:03:41.167469 | 
orchestrator | TASK [include_role : ironic] *************************************************** 2025-07-12 20:03:41.167479 | orchestrator | Saturday 12 July 2025 20:00:09 +0000 (0:00:00.274) 0:02:44.267 ********* 2025-07-12 20:03:41.167488 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.167498 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.167508 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.167517 | orchestrator | 2025-07-12 20:03:41.167527 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-07-12 20:03:41.167537 | orchestrator | Saturday 12 July 2025 20:00:09 +0000 (0:00:00.266) 0:02:44.533 ********* 2025-07-12 20:03:41.167551 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.167561 | orchestrator | 2025-07-12 20:03:41.167571 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-07-12 20:03:41.167581 | orchestrator | Saturday 12 July 2025 20:00:10 +0000 (0:00:01.036) 0:02:45.569 ********* 2025-07-12 20:03:41.167597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:03:41.167609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:03:41.167619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:03:41.167630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:03:41.167647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:03:41.167662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:03:41.167679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:03:41.167690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:03:41.167700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:03:41.167711 | orchestrator | 2025-07-12 20:03:41.167721 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-07-12 20:03:41.167737 | orchestrator | Saturday 12 July 2025 20:00:13 +0000 (0:00:02.792) 0:02:48.362 ********* 2025-07-12 20:03:41.167747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:03:41.167763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:03:41.167780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:03:41.167790 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.167800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:03:41.167811 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:03:41.167827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:03:41.167837 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.167853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:03:41.167869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:03:41.167880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:03:41.167890 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.167900 | orchestrator | 2025-07-12 20:03:41.167910 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-07-12 20:03:41.167920 | orchestrator | Saturday 12 July 2025 20:00:13 +0000 (0:00:00.561) 0:02:48.923 ********* 2025-07-12 
20:03:41.167930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 20:03:41.167941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 20:03:41.167998 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.168009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 20:03:41.168020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 20:03:41.168030 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.168039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 20:03:41.168050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-12 20:03:41.168059 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 20:03:41.168069 | orchestrator | 2025-07-12 20:03:41.168079 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-07-12 20:03:41.168089 | orchestrator | Saturday 12 July 2025 20:00:14 +0000 (0:00:00.838) 0:02:49.762 ********* 2025-07-12 20:03:41.168099 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.168108 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.168118 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.168127 | orchestrator | 2025-07-12 20:03:41.168137 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-07-12 20:03:41.168147 | orchestrator | Saturday 12 July 2025 20:00:16 +0000 (0:00:01.283) 0:02:51.045 ********* 2025-07-12 20:03:41.168156 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.168166 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.168175 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.168185 | orchestrator | 2025-07-12 20:03:41.168200 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-07-12 20:03:41.168210 | orchestrator | Saturday 12 July 2025 20:00:17 +0000 (0:00:01.864) 0:02:52.910 ********* 2025-07-12 20:03:41.168219 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.168229 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.168239 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.168248 | orchestrator | 2025-07-12 20:03:41.168258 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-07-12 20:03:41.168267 | orchestrator | Saturday 12 July 2025 20:00:18 +0000 (0:00:00.291) 0:02:53.202 ********* 2025-07-12 20:03:41.168277 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.168286 | orchestrator | 2025-07-12 20:03:41.168296 | orchestrator | TASK 
[haproxy-config : Copying over magnum haproxy config] ********************* 2025-07-12 20:03:41.168306 | orchestrator | Saturday 12 July 2025 20:00:19 +0000 (0:00:01.098) 0:02:54.301 ********* 2025-07-12 20:03:41.168323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:03:41.168344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:03:41.168355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.168367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.168383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:03:41.168399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.168416 | orchestrator | 2025-07-12 20:03:41.168426 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-07-12 20:03:41.168436 | orchestrator | Saturday 12 July 2025 20:00:22 +0000 (0:00:03.064) 0:02:57.365 ********* 2025-07-12 20:03:41.168446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:03:41.168457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.168467 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.168482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:03:41.168498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.168509 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.168519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:03:41.168535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.168545 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.168555 | orchestrator | 2025-07-12 20:03:41.168565 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-07-12 20:03:41.168575 | orchestrator | Saturday 12 July 2025 20:00:22 +0000 (0:00:00.576) 0:02:57.942 ********* 2025-07-12 20:03:41.168585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:03:41.168595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:03:41.168605 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.168614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  
2025-07-12 20:03:41.168624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:03:41.168634 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.168644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:03:41.168654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:03:41.168664 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.168673 | orchestrator | 2025-07-12 20:03:41.168683 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-07-12 20:03:41.168699 | orchestrator | Saturday 12 July 2025 20:00:24 +0000 (0:00:01.113) 0:02:59.056 ********* 2025-07-12 20:03:41.168709 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.168719 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.168729 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.168738 | orchestrator | 2025-07-12 20:03:41.168748 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-07-12 20:03:41.168763 | orchestrator | Saturday 12 July 2025 20:00:25 +0000 (0:00:01.320) 0:03:00.376 ********* 2025-07-12 20:03:41.168773 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.168783 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.168792 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.168802 | orchestrator | 2025-07-12 20:03:41.168812 | orchestrator | TASK [include_role : manila] 
*************************************************** 2025-07-12 20:03:41.168821 | orchestrator | Saturday 12 July 2025 20:00:27 +0000 (0:00:01.883) 0:03:02.260 ********* 2025-07-12 20:03:41.168836 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.168846 | orchestrator | 2025-07-12 20:03:41.168856 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-07-12 20:03:41.168866 | orchestrator | Saturday 12 July 2025 20:00:28 +0000 (0:00:00.941) 0:03:03.202 ********* 2025-07-12 20:03:41.168876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 20:03:41.168887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.168897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.168908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 20:03:41.168922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.168944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 20:03:41.169036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 
'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169084 | orchestrator | 2025-07-12 20:03:41.169094 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-07-12 20:03:41.169104 | orchestrator | Saturday 12 July 2025 20:00:31 +0000 (0:00:03.343) 0:03:06.545 ********* 2025-07-12 20:03:41.169114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 20:03:41.169125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169161 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.169175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 20:03:41.169191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 20:03:41.169232 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 20:03:41.169243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.169477 | orchestrator 
| skipping: [testbed-node-2] 2025-07-12 20:03:41.169488 | orchestrator | 2025-07-12 20:03:41.169498 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-07-12 20:03:41.169507 | orchestrator | Saturday 12 July 2025 20:00:32 +0000 (0:00:00.572) 0:03:07.118 ********* 2025-07-12 20:03:41.169516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:03:41.169524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:03:41.169533 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.169546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:03:41.169554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:03:41.169563 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.169571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:03:41.169579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:03:41.169587 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.169596 | orchestrator | 2025-07-12 20:03:41.169604 | 
orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-07-12 20:03:41.169612 | orchestrator | Saturday 12 July 2025 20:00:32 +0000 (0:00:00.772) 0:03:07.890 ********* 2025-07-12 20:03:41.169621 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.169629 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.169637 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.169646 | orchestrator | 2025-07-12 20:03:41.169660 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-07-12 20:03:41.169668 | orchestrator | Saturday 12 July 2025 20:00:34 +0000 (0:00:01.518) 0:03:09.409 ********* 2025-07-12 20:03:41.169676 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.169684 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.169692 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.169700 | orchestrator | 2025-07-12 20:03:41.169708 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-07-12 20:03:41.169716 | orchestrator | Saturday 12 July 2025 20:00:36 +0000 (0:00:02.146) 0:03:11.556 ********* 2025-07-12 20:03:41.169724 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.169732 | orchestrator | 2025-07-12 20:03:41.169740 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-07-12 20:03:41.169748 | orchestrator | Saturday 12 July 2025 20:00:37 +0000 (0:00:01.049) 0:03:12.606 ********* 2025-07-12 20:03:41.169756 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 20:03:41.169764 | orchestrator | 2025-07-12 20:03:41.169772 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-07-12 20:03:41.169780 | orchestrator | Saturday 12 July 2025 20:00:40 +0000 (0:00:02.965) 0:03:15.571 ********* 2025-07-12 
20:03:41.169846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:03:41.169860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:03:41.169878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 20:03:41.169891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 20:03:41.169899 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.169908 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.170005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:03:41.170072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 20:03:41.170082 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.170091 | orchestrator | 2025-07-12 20:03:41.170099 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-07-12 20:03:41.170107 | orchestrator | Saturday 12 July 2025 20:00:42 +0000 (0:00:02.117) 0:03:17.689 ********* 
2025-07-12 20:03:41.170124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:03:41.170217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 20:03:41.170244 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.170258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:03:41.170281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 20:03:41.170294 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.170388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': 
'3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:03:41.170415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-07-12 20:03:41.170438 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.170452 | orchestrator | 2025-07-12 20:03:41.170465 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-07-12 20:03:41.170478 | orchestrator | Saturday 12 July 2025 20:00:44 +0000 (0:00:01.811) 0:03:19.501 ********* 2025-07-12 
20:03:41.170491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:03:41.170505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:03:41.170519 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.170533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:03:41.170552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:03:41.170566 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.170668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:03:41.170689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:03:41.170720 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.170733
| orchestrator |
2025-07-12 20:03:41.170747 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-07-12 20:03:41.170761 | orchestrator | Saturday 12 July 2025 20:00:46 +0000 (0:00:02.271) 0:03:21.773 *********
2025-07-12 20:03:41.170774 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.170788 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.170801 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.170815 | orchestrator |
2025-07-12 20:03:41.170827 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-07-12 20:03:41.170835 | orchestrator | Saturday 12 July 2025 20:00:48 +0000 (0:00:02.122) 0:03:23.895 *********
2025-07-12 20:03:41.170843 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.170851 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.170859 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.170867 | orchestrator |
2025-07-12 20:03:41.170875 | orchestrator | TASK [include_role : masakari] *************************************************
2025-07-12 20:03:41.170883 | orchestrator | Saturday 12 July 2025 20:00:50 +0000 (0:00:01.420) 0:03:25.315 *********
2025-07-12 20:03:41.170891 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.170898 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.170906 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.170914 | orchestrator |
2025-07-12 20:03:41.170922 | orchestrator | TASK [include_role : memcached] ************************************************
2025-07-12 20:03:41.170930 | orchestrator | Saturday 12 July 2025 20:00:50 +0000 (0:00:00.316) 0:03:25.631 *********
2025-07-12 20:03:41.170938 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.170946 | orchestrator |
2025-07-12 20:03:41.171012 | orchestrator | TASK [haproxy-config : Copying over
memcached haproxy config] ****************** 2025-07-12 20:03:41.171020 | orchestrator | Saturday 12 July 2025 20:00:51 +0000 (0:00:01.037) 0:03:26.668 ********* 2025-07-12 20:03:41.171029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 20:03:41.171049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 20:03:41.171167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-07-12 20:03:41.171196 | orchestrator | 2025-07-12 20:03:41.171205 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-07-12 20:03:41.171213 | orchestrator | Saturday 12 July 2025 20:00:53 +0000 (0:00:01.615) 0:03:28.284 ********* 2025-07-12 20:03:41.171224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 20:03:41.171238 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.171252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 20:03:41.171266 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.171280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-12 20:03:41.171294 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.171305 | orchestrator | 2025-07-12 20:03:41.171316 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-07-12 20:03:41.171327 | orchestrator | Saturday 12 July 2025 20:00:53 +0000 (0:00:00.347) 0:03:28.631 ********* 2025-07-12 20:03:41.171346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}})
2025-07-12 20:03:41.171366 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.171378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-07-12 20:03:41.171389 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.171470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-07-12 20:03:41.171481 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.171488 | orchestrator |
2025-07-12 20:03:41.171500 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-07-12 20:03:41.171511 | orchestrator | Saturday 12 July 2025 20:00:54 +0000 (0:00:00.606) 0:03:29.238 *********
2025-07-12 20:03:41.171522 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.171534 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.171545 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.171557 | orchestrator |
2025-07-12 20:03:41.171568 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-07-12 20:03:41.171580 | orchestrator | Saturday 12 July 2025 20:00:54 +0000 (0:00:00.626) 0:03:29.865 *********
2025-07-12 20:03:41.171591 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.171602 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.171613 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.171625 | orchestrator |
2025-07-12 20:03:41.171636 | orchestrator | TASK [include_role : mistral] **************************************************
2025-07-12 20:03:41.171648 | orchestrator | Saturday 12 July 2025 20:00:55 +0000 (0:00:01.059) 0:03:30.924 *********
2025-07-12 20:03:41.171659 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.171671 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.171683 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.171696 | orchestrator |
2025-07-12 20:03:41.171707 | orchestrator | TASK [include_role : neutron] **************************************************
2025-07-12 20:03:41.171719 | orchestrator | Saturday 12 July 2025 20:00:56 +0000 (0:00:00.285) 0:03:31.210 *********
2025-07-12 20:03:41.171730 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.171741 | orchestrator |
2025-07-12 20:03:41.171752 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-07-12 20:03:41.171764 | orchestrator | Saturday 12 July 2025 20:00:57 +0000 (0:00:01.223) 0:03:32.433 *********
2025-07-12 20:03:41.171777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:03:41.171790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.171812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.171903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.171920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 20:03:41.171978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.171994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 
'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:03:41.172008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:03:41.172030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.172121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:03:41.172138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:03:41.172157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.172169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.172190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.172272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 20:03:41.172367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.172390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:03:41.172401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.172413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 20:03:41.172475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 20:03:41.172495 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.172589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:03:41.172608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.172663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.172678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.172700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.172720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:03:41.172733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.172824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-07-12 20:03:41.172889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.172906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:03:41.172929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.172943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-07-12 20:03:41.173085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:03:41.173119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-07-12 20:03:41.173237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.173323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.173336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:03:41.173369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-07-12 20:03:41.173471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.173494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-07-12 20:03:41.173573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:03:41.173585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173597 | orchestrator |
2025-07-12 20:03:41.173609 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-07-12 20:03:41.173627 | orchestrator | Saturday 12 July 2025 20:01:01 +0000 (0:00:04.160) 0:03:36.594 *********
2025-07-12 20:03:41.173719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:03:41.173735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-07-12 20:03:41.173795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:03:41.173869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.173894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.173902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.173913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.174051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.174067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-07-12 20:03:41.174081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:03:41.174088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.174095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.174105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.174159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-07-12 20:03:41.174168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.174185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:03:41.174192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.174199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.174205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:03:41.174255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:03:41.174273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-07-12 20:03:41.174290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.174301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 20:03:41.174327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:03:41.174401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:03:41.174422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174453 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.174464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 20:03:41.174558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 20:03:41.174579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:03:41.174588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174610 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.174620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:03:41.174637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:03:41.174678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:03:41.174710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 20:03:41.174733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:03:41.174743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 20:03:41.174809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:03:41.174820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.174831 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.174842 | orchestrator | 2025-07-12 20:03:41.174851 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-07-12 20:03:41.174858 | orchestrator | Saturday 12 July 2025 20:01:03 +0000 (0:00:01.731) 0:03:38.325 ********* 2025-07-12 20:03:41.174865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:03:41.174872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:03:41.174878 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.174885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:03:41.174891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:03:41.174897 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.174904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:03:41.174910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:03:41.174916 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.174923 | orchestrator | 2025-07-12 20:03:41.174929 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-07-12 20:03:41.174935 | orchestrator | Saturday 12 July 2025 20:01:05 +0000 (0:00:02.341) 0:03:40.667 ********* 2025-07-12 20:03:41.174970 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.174979 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.174985 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.174992 | orchestrator | 2025-07-12 20:03:41.174998 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-07-12 20:03:41.175004 | orchestrator | Saturday 12 July 2025 20:01:07 +0000 (0:00:01.434) 0:03:42.102 ********* 2025-07-12 20:03:41.175015 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.175021 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.175027 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.175034 | orchestrator | 2025-07-12 20:03:41.175040 | orchestrator | TASK [include_role : placement] ************************************************ 2025-07-12 20:03:41.175046 | orchestrator | Saturday 12 July 2025 20:01:09 +0000 (0:00:02.028) 0:03:44.130 ********* 2025-07-12 20:03:41.175052 | orchestrator | included: placement for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.175059 | orchestrator | 2025-07-12 20:03:41.175065 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-07-12 20:03:41.175071 | orchestrator | Saturday 12 July 2025 20:01:10 +0000 (0:00:01.164) 0:03:45.294 ********* 2025-07-12 20:03:41.175100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.175112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.175123 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.175141 | orchestrator | 2025-07-12 20:03:41.175152 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-07-12 20:03:41.175162 | orchestrator | Saturday 12 July 2025 20:01:13 +0000 (0:00:03.602) 0:03:48.897 ********* 2025-07-12 20:03:41.175178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:03:41.175189 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.175228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:03:41.175241 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.175253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:03:41.175265 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.175277 | orchestrator | 2025-07-12 20:03:41.175288 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-07-12 20:03:41.175299 | orchestrator | Saturday 12 July 2025 20:01:14 +0000 (0:00:00.493) 0:03:49.391 ********* 2025-07-12 20:03:41.175311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:03:41.175323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:03:41.175336 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.175348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:03:41.175366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:03:41.175374 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.175382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175397 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.175404 | orchestrator |
2025-07-12 20:03:41.175412 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-07-12 20:03:41.175419 | orchestrator | Saturday 12 July 2025 20:01:15 +0000 (0:00:00.750) 0:03:50.141 *********
2025-07-12 20:03:41.175427 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.175434 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.175440 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.175447 | orchestrator |
2025-07-12 20:03:41.175453 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-07-12 20:03:41.175464 | orchestrator | Saturday 12 July 2025 20:01:16 +0000 (0:00:01.641) 0:03:51.783 *********
2025-07-12 20:03:41.175470 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.175476 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.175483 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.175489 | orchestrator |
2025-07-12 20:03:41.175495 | orchestrator | TASK [include_role : nova] *****************************************************
2025-07-12 20:03:41.175502 | orchestrator | Saturday 12 July 2025 20:01:18 +0000 (0:00:02.068) 0:03:53.851 *********
2025-07-12 20:03:41.175508 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.175514 | orchestrator |
2025-07-12 20:03:41.175521 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-07-12 20:03:41.175527 | orchestrator | Saturday 12 July 2025 20:01:20 +0000 (0:00:01.236) 0:03:55.087 *********
2025-07-12 20:03:41.175558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.175568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.175613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.175639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175652 | orchestrator |
2025-07-12 20:03:41.175659 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-07-12 20:03:41.175679 | orchestrator | Saturday 12 July 2025 20:01:24 +0000 (0:00:04.758) 0:03:59.845 *********
2025-07-12 20:03:41.175708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.175716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175734 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.175741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.175749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175775 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.175815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.175836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:03:41.175850 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.175857 | orchestrator |
2025-07-12 20:03:41.175863 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-07-12 20:03:41.175869 | orchestrator | Saturday 12 July 2025 20:01:25 +0000 (0:00:01.014) 0:04:00.860 *********
2025-07-12 20:03:41.175876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175906 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.175913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 20:03:41.175983 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.175996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 20:03:41.176002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-07-12 20:03:41.176009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 20:03:41.176015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-07-12 20:03:41.176022 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.176028 | orchestrator |
2025-07-12 20:03:41.176034 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-07-12 20:03:41.176041 | orchestrator | Saturday 12 July 2025 20:01:26 +0000 (0:00:00.941) 0:04:01.801 *********
2025-07-12 20:03:41.176047 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.176053 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.176060 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.176066 | orchestrator |
2025-07-12 20:03:41.176072 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-07-12 20:03:41.176079 | orchestrator | Saturday 12 July 2025 20:01:28 +0000 (0:00:01.656) 0:04:03.457 *********
2025-07-12 20:03:41.176085 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.176091 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.176098 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.176104 | orchestrator |
2025-07-12 20:03:41.176110 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-07-12 20:03:41.176117 | orchestrator | Saturday 12 July 2025 20:01:30 +0000 (0:00:02.270) 0:04:05.728 *********
2025-07-12 20:03:41.176123 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.176129 | orchestrator |
2025-07-12 20:03:41.176136 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-07-12 20:03:41.176142 | orchestrator | Saturday 12 July 2025 20:01:32 +0000 (0:00:02.016) 0:04:07.744 *********
2025-07-12 20:03:41.176149 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-07-12 20:03:41.176155 | orchestrator |
2025-07-12 20:03:41.176162 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-07-12 20:03:41.176168 | orchestrator | Saturday 12 July 2025 20:01:33 +0000 (0:00:01.071) 0:04:08.816 *********
2025-07-12 20:03:41.176175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176213 | orchestrator |
2025-07-12 20:03:41.176251 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-07-12 20:03:41.176264 | orchestrator | Saturday 12 July 2025 20:01:37 +0000 (0:00:03.797) 0:04:12.614 *********
2025-07-12 20:03:41.176274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176285 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.176296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176307 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.176318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176329 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.176340 | orchestrator |
2025-07-12 20:03:41.176351 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-07-12 20:03:41.176362 | orchestrator | Saturday 12 July 2025 20:01:38 +0000 (0:00:01.211) 0:04:13.826 *********
2025-07-12 20:03:41.176376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 20:03:41.176387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 20:03:41.176398 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.176409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 20:03:41.176420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 20:03:41.176430 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.176450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 20:03:41.176467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-07-12 20:03:41.176477 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.176488 | orchestrator |
2025-07-12 20:03:41.176498 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-07-12 20:03:41.176509 | orchestrator | Saturday 12 July 2025 20:01:40 +0000 (0:00:01.833) 0:04:15.660 *********
2025-07-12 20:03:41.176519 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.176529 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.176540 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.176550 | orchestrator |
2025-07-12 20:03:41.176561 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-07-12 20:03:41.176567 | orchestrator | Saturday 12 July 2025 20:01:43 +0000 (0:00:02.391) 0:04:18.051 *********
2025-07-12 20:03:41.176574 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.176580 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.176588 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.176599 | orchestrator |
2025-07-12 20:03:41.176644 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-07-12 20:03:41.176659 | orchestrator | Saturday 12 July 2025 20:01:45 +0000 (0:00:02.961) 0:04:21.012 *********
2025-07-12 20:03:41.176670 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-07-12 20:03:41.176681 | orchestrator |
2025-07-12 20:03:41.176691 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-07-12 20:03:41.176703 | orchestrator | Saturday 12 July 2025 20:01:46 +0000 (0:00:00.876) 0:04:21.888 *********
2025-07-12 20:03:41.176715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176727 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.176738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176748 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.176759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176770 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.176788 | orchestrator |
2025-07-12 20:03:41.176800 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-07-12 20:03:41.176812 | orchestrator | Saturday 12 July 2025 20:01:48 +0000 (0:00:01.474) 0:04:23.362 *********
2025-07-12 20:03:41.176823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176833 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.176850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176861 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.176873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-07-12 20:03:41.176883 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.176894 | orchestrator |
2025-07-12 20:03:41.176936 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-07-12 20:03:41.176982 | orchestrator | Saturday 12 July 2025 20:01:49 +0000 (0:00:01.533) 0:04:24.896 *********
2025-07-12 20:03:41.176995 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.177005 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.177017 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.177028 | orchestrator |
2025-07-12 20:03:41.177039 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-07-12 20:03:41.177049 | orchestrator | Saturday 12 July 2025 20:01:51 +0000 (0:00:01.265) 0:04:26.162 *********
2025-07-12 20:03:41.177059 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.177071 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.177081 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.177092 | orchestrator |
2025-07-12 20:03:41.177103 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-07-12 20:03:41.177191 | orchestrator | Saturday 12 July 2025 20:01:53 +0000 (0:00:02.368) 0:04:28.530 *********
2025-07-12 20:03:41.177203 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.177214 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.177225 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.177235 | orchestrator |
2025-07-12 20:03:41.177246 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-07-12 20:03:41.177257 | orchestrator | Saturday 12 July 2025 20:01:56 +0000 (0:00:03.152) 0:04:31.683 *********
2025-07-12 20:03:41.177267 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-07-12 20:03:41.177278 | orchestrator |
2025-07-12 20:03:41.177289 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-07-12 20:03:41.177310 | orchestrator | Saturday 12 July 2025 20:01:57 +0000 (0:00:00.845) 0:04:32.528 *********
2025-07-12 20:03:41.177322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-12 20:03:41.177334 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.177344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-12 20:03:41.177355 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.177366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-12 20:03:41.177379 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.177390 | orchestrator |
2025-07-12 20:03:41.177401 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-07-12 20:03:41.177411 | orchestrator | Saturday 12 July 2025 20:01:58 +0000 (0:00:00.897) 0:04:33.426 *********
2025-07-12 20:03:41.177422 | orchestrator | skipping: [testbed-node-1] => (item={'key':
'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 20:03:41.177433 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.177488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 20:03:41.177498 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.177504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 20:03:41.177518 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.177524 | orchestrator | 2025-07-12 20:03:41.177530 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 
2025-07-12 20:03:41.177537 | orchestrator | Saturday 12 July 2025 20:01:59 +0000 (0:00:01.062) 0:04:34.488 ********* 2025-07-12 20:03:41.177543 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.177552 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.177562 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.177572 | orchestrator | 2025-07-12 20:03:41.177583 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 20:03:41.177594 | orchestrator | Saturday 12 July 2025 20:02:00 +0000 (0:00:01.427) 0:04:35.915 ********* 2025-07-12 20:03:41.177605 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:41.177615 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:41.177626 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:41.177637 | orchestrator | 2025-07-12 20:03:41.177648 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-12 20:03:41.177658 | orchestrator | Saturday 12 July 2025 20:02:03 +0000 (0:00:02.139) 0:04:38.055 ********* 2025-07-12 20:03:41.177669 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:41.177679 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:41.177690 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:41.177700 | orchestrator | 2025-07-12 20:03:41.177710 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-07-12 20:03:41.177721 | orchestrator | Saturday 12 July 2025 20:02:05 +0000 (0:00:02.741) 0:04:40.797 ********* 2025-07-12 20:03:41.177770 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.177780 | orchestrator | 2025-07-12 20:03:41.177788 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-07-12 20:03:41.177799 | orchestrator | Saturday 12 July 2025 20:02:07 +0000 (0:00:01.239) 0:04:42.036 ********* 2025-07-12 
20:03:41.177810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.177846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:03:41.177858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.177917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.178048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.178058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:03:41.178069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.178202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.178215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:03:41.178226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.178259 | orchestrator | 2025-07-12 20:03:41.178269 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-07-12 20:03:41.178286 | orchestrator | Saturday 12 July 2025 20:02:10 +0000 (0:00:03.474) 0:04:45.511 ********* 2025-07-12 20:03:41.178327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:03:41.178340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:03:41.178350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.178379 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.178397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:03:41.178437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:03:41.178449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:03:41.178468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 
20:03:41.178477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:03:41.178491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.178509 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.178543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:03:41.178563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:03:41.178572 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.178581 | orchestrator | 2025-07-12 20:03:41.178590 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-07-12 20:03:41.178600 | orchestrator | Saturday 12 July 2025 20:02:11 +0000 (0:00:00.720) 0:04:46.231 ********* 2025-07-12 20:03:41.178609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:03:41.178619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}})  2025-07-12 20:03:41.178628 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.178637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:03:41.178647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:03:41.178656 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.178665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:03:41.178674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:03:41.178690 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.178697 | orchestrator | 2025-07-12 20:03:41.178702 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-07-12 20:03:41.178708 | orchestrator | Saturday 12 July 2025 20:02:12 +0000 (0:00:00.938) 0:04:47.170 ********* 2025-07-12 20:03:41.178714 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.178719 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.178724 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.178730 | orchestrator | 2025-07-12 20:03:41.178735 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-07-12 20:03:41.178741 | orchestrator | Saturday 12 July 2025 20:02:13 +0000 (0:00:01.771) 
0:04:48.942 ********* 2025-07-12 20:03:41.178750 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:41.178755 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:41.178761 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:41.178766 | orchestrator | 2025-07-12 20:03:41.178772 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-07-12 20:03:41.178777 | orchestrator | Saturday 12 July 2025 20:02:16 +0000 (0:00:02.116) 0:04:51.058 ********* 2025-07-12 20:03:41.178782 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.178788 | orchestrator | 2025-07-12 20:03:41.178793 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-07-12 20:03:41.178799 | orchestrator | Saturday 12 July 2025 20:02:17 +0000 (0:00:01.325) 0:04:52.384 ********* 2025-07-12 20:03:41.178821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:03:41.178829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:03:41.178835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:03:41.178848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:03:41.178872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:03:41.178880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:03:41.178886 | orchestrator | 2025-07-12 20:03:41.178892 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-07-12 20:03:41.178898 | orchestrator | Saturday 12 July 2025 20:02:22 +0000 (0:00:05.421) 0:04:57.805 ********* 2025-07-12 20:03:41.178904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:03:41.178917 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:03:41.178923 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.178943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  
2025-07-12 20:03:41.178965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:03:41.178972 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.178978 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}})  2025-07-12 20:03:41.178988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:03:41.178994 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.179000 | orchestrator | 2025-07-12 20:03:41.179005 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-07-12 20:03:41.179014 | orchestrator | Saturday 12 July 2025 20:02:24 +0000 (0:00:01.373) 0:04:59.178 ********* 2025-07-12 20:03:41.179020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 20:03:41.179026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:03:41.179032 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:03:41.179052 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.179059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 20:03:41.179064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:03:41.179070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:03:41.179076 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.179082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 20:03:41.179087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:03:41.179093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}})  2025-07-12 20:03:41.179103 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.179108 | orchestrator | 2025-07-12 20:03:41.179114 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-07-12 20:03:41.179119 | orchestrator | Saturday 12 July 2025 20:02:25 +0000 (0:00:01.014) 0:05:00.193 ********* 2025-07-12 20:03:41.179125 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.179131 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.179136 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.179142 | orchestrator | 2025-07-12 20:03:41.179147 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-07-12 20:03:41.179153 | orchestrator | Saturday 12 July 2025 20:02:25 +0000 (0:00:00.499) 0:05:00.693 ********* 2025-07-12 20:03:41.179158 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:41.179164 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:41.179169 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:41.179175 | orchestrator | 2025-07-12 20:03:41.179180 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-07-12 20:03:41.179186 | orchestrator | Saturday 12 July 2025 20:02:27 +0000 (0:00:01.422) 0:05:02.115 ********* 2025-07-12 20:03:41.179191 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:03:41.179197 | orchestrator | 2025-07-12 20:03:41.179202 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-07-12 20:03:41.179208 | orchestrator | Saturday 12 July 2025 20:02:28 +0000 (0:00:01.665) 0:05:03.781 ********* 2025-07-12 20:03:41.179214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:03:41.179223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:03:41.179243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:03:41.179250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:03:41.179260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:03:41.179266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:03:41.179272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:03:41.179278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:03:41.179289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:03:41.179295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:03:41.179327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:03:41.179344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:03:41.179354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:03:41.179363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:03:41.179372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:03:41.179386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:03:41.179402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 20:03:41.179418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:03:41.179428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:03:41.179437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 20:03:41.179447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:03:41.179461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 20:03:41.179477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 20:03:41.179507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:03:41.179512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 20:03:41.179521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:03:41.179546 | orchestrator |
2025-07-12 20:03:41.179552 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-07-12 20:03:41.179557 | orchestrator | Saturday 12 July 2025 20:02:33 +0000 (0:00:04.305) 0:05:08.086 *********
2025-07-12 20:03:41.179563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 20:03:41.179569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:03:41.179574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:03:41.179603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 20:03:41.179613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 20:03:41.179619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:03:41.179636 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.179644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 20:03:41.179655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:03:41.179664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:03:41.179681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 20:03:41.179687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 20:03:41.179702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 20:03:41.179723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:03:41.179729 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.179734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:03:41.179740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:03:41.179786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 20:03:41.179797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-07-12 20:03:41.179806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:03:41.179825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:03:41.179839 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.179849 | orchestrator |
2025-07-12 20:03:41.179857 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-07-12 20:03:41.179867 | orchestrator | Saturday 12 July 2025 20:02:34 +0000 (0:00:01.211) 0:05:09.298 *********
2025-07-12 20:03:41.179876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-07-12 20:03:41.179890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-07-12 20:03:41.179901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-07-12 20:03:41.179914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-07-12 20:03:41.179925 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.179934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-07-12 20:03:41.179943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-07-12 20:03:41.179998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-07-12 20:03:41.180008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-07-12 20:03:41.180017 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-07-12 20:03:41.180037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-07-12 20:03:41.180046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-07-12 20:03:41.180055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-07-12 20:03:41.180065 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.180074 | orchestrator |
2025-07-12 20:03:41.180083 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-07-12 20:03:41.180099 | orchestrator | Saturday 12 July 2025 20:02:35 +0000 (0:00:01.051) 0:05:10.349 *********
2025-07-12 20:03:41.180108 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.180117 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180125 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.180132 | orchestrator |
2025-07-12 20:03:41.180141 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-07-12 20:03:41.180151 | orchestrator | Saturday 12 July 2025 20:02:35 +0000 (0:00:00.489) 0:05:10.839 *********
2025-07-12 20:03:41.180159 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.180172 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180181 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.180189 | orchestrator |
2025-07-12 20:03:41.180197 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-07-12 20:03:41.180206 | orchestrator | Saturday 12 July 2025 20:02:37 +0000 (0:00:01.713) 0:05:12.553 *********
2025-07-12 20:03:41.180214 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.180222 | orchestrator |
2025-07-12 20:03:41.180230 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-07-12 20:03:41.180237 | orchestrator | Saturday 12 July 2025 20:02:39 +0000 (0:00:01.739) 0:05:14.292 *********
2025-07-12 20:03:41.180255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:03:41.180267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:03:41.180277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:03:41.180292 | orchestrator |
2025-07-12 20:03:41.180302 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-07-12 20:03:41.180310 | orchestrator | Saturday 12 July 2025 20:02:41 +0000 (0:00:02.667) 0:05:16.960 *********
2025-07-12 20:03:41.180318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:03:41.180327 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.180344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:03:41.180354 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:03:41.180371 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.180380 | orchestrator |
2025-07-12 20:03:41.180388 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-07-12 20:03:41.180396 | orchestrator | Saturday 12 July 2025 20:02:42 +0000 (0:00:00.448) 0:05:17.408 *********
2025-07-12 20:03:41.180411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-07-12 20:03:41.180420 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.180428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-07-12 20:03:41.180436 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-07-12 20:03:41.180452 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.180460 | orchestrator |
2025-07-12 20:03:41.180468 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-07-12 20:03:41.180477 | orchestrator | Saturday 12 July 2025 20:02:43 +0000 (0:00:01.106) 0:05:18.515 *********
2025-07-12 20:03:41.180484 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.180492 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180500 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.180507 | orchestrator |
2025-07-12 20:03:41.180512 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-07-12 20:03:41.180517 | orchestrator | Saturday 12 July 2025 20:02:43 +0000 (0:00:00.493) 0:05:19.008 *********
2025-07-12 20:03:41.180522 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.180527 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180531 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.180536 | orchestrator |
2025-07-12 20:03:41.180541 | orchestrator | TASK [include_role : skyline] **************************************************
2025-07-12 20:03:41.180546 | orchestrator | Saturday 12 July 2025 20:02:45 +0000 (0:00:01.378) 0:05:20.387 *********
2025-07-12 20:03:41.180551 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:03:41.180555 | orchestrator |
2025-07-12 20:03:41.180560 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-07-12 20:03:41.180565 | orchestrator | Saturday 12 July 2025 20:02:47 +0000 (0:00:01.723) 0:05:22.111 *********
2025-07-12 20:03:41.180575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.180590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image':
'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.180606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.180615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.180628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 20:03:41.180641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.180650 | orchestrator |
2025-07-12 20:03:41.180657 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-07-12 20:03:41.180665 | orchestrator | Saturday 12 July 2025 20:02:53 +0000 (0:00:06.157) 0:05:28.268 *********
2025-07-12 20:03:41.180680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.180686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.180691 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.180696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.180706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.180712 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.180726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-12 20:03:41.180731 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.180736 | orchestrator |
2025-07-12 20:03:41.180741 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-07-12 20:03:41.180746 | orchestrator | Saturday 12 July 2025 20:02:53 +0000 (0:00:00.708) 0:05:28.976 *********
2025-07-12 20:03:41.180751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180785 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.180793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180856 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:03:41.180866 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.180871 | orchestrator |
2025-07-12 20:03:41.180876 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-07-12 20:03:41.180881 | orchestrator | Saturday 12 July 2025 20:02:55 +0000 (0:00:01.898) 0:05:30.874 *********
2025-07-12 20:03:41.180886 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.180891 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.180896 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.180901 | orchestrator |
2025-07-12 20:03:41.180905 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-07-12 20:03:41.180910 | orchestrator | Saturday 12 July 2025 20:02:57 +0000 (0:00:01.237) 0:05:32.112 *********
2025-07-12 20:03:41.180915 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.180922 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.180930 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.180938 | orchestrator |
2025-07-12 20:03:41.180961 | orchestrator | TASK [include_role : swift] ****************************************************
2025-07-12 20:03:41.180970 | orchestrator | Saturday 12 July 2025 20:02:59 +0000 (0:00:02.055) 0:05:34.168 *********
2025-07-12 20:03:41.180978 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.180987 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.180995 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181003 | orchestrator |
2025-07-12 20:03:41.181011 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-07-12 20:03:41.181020 | orchestrator | Saturday 12 July 2025 20:02:59 +0000 (0:00:00.335) 0:05:34.503 *********
2025-07-12 20:03:41.181028 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181036 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181044 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181053 | orchestrator |
2025-07-12 20:03:41.181060 | orchestrator | TASK [include_role : trove] ****************************************************
2025-07-12 20:03:41.181069 | orchestrator | Saturday 12 July 2025 20:03:00 +0000 (0:00:00.652) 0:05:35.156 *********
2025-07-12 20:03:41.181079 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181088 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181096 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181104 | orchestrator |
2025-07-12 20:03:41.181113 | orchestrator | TASK [include_role : venus] ****************************************************
2025-07-12 20:03:41.181121 | orchestrator | Saturday 12 July 2025 20:03:00 +0000 (0:00:00.329) 0:05:35.486 *********
2025-07-12 20:03:41.181129 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181137 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181145 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181153 | orchestrator |
2025-07-12 20:03:41.181161 | orchestrator | TASK [include_role : watcher] **************************************************
2025-07-12 20:03:41.181175 | orchestrator | Saturday 12 July 2025 20:03:00 +0000 (0:00:00.321) 0:05:35.807 *********
2025-07-12 20:03:41.181183 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181191 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181199 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181208 | orchestrator |
2025-07-12 20:03:41.181217 | orchestrator | TASK [include_role : zun] ******************************************************
2025-07-12 20:03:41.181224 | orchestrator | Saturday 12 July 2025 20:03:01 +0000 (0:00:00.310) 0:05:36.118 *********
2025-07-12 20:03:41.181233 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181239 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181244 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181249 | orchestrator |
2025-07-12 20:03:41.181253 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-07-12 20:03:41.181258 | orchestrator | Saturday 12 July 2025 20:03:01 +0000 (0:00:00.825) 0:05:36.943 *********
2025-07-12 20:03:41.181263 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.181268 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.181273 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.181278 | orchestrator |
2025-07-12 20:03:41.181286 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-07-12 20:03:41.181291 | orchestrator | Saturday 12 July 2025 20:03:02 +0000 (0:00:00.341) 0:05:37.608 *********
2025-07-12 20:03:41.181296 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.181301 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.181306 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.181310 | orchestrator |
2025-07-12 20:03:41.181315 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-07-12 20:03:41.181320 | orchestrator | Saturday 12 July 2025 20:03:02 +0000 (0:00:01.234) 0:05:37.950 *********
2025-07-12 20:03:41.181325 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.181329 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.181334 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.181339 | orchestrator |
2025-07-12 20:03:41.181344 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-07-12 20:03:41.181349 | orchestrator | Saturday 12 July 2025 20:03:04 +0000 (0:00:01.234) 0:05:39.184 *********
2025-07-12 20:03:41.181353 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.181358 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.181367 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.181372 | orchestrator |
2025-07-12 20:03:41.181377 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-07-12 20:03:41.181382 | orchestrator | Saturday 12 July 2025 20:03:05 +0000 (0:00:00.915) 0:05:40.100 *********
2025-07-12 20:03:41.181387 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.181391 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.181396 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.181401 | orchestrator |
2025-07-12 20:03:41.181406 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-07-12 20:03:41.181411 | orchestrator | Saturday 12 July 2025 20:03:05 +0000 (0:00:00.922) 0:05:41.022 *********
2025-07-12 20:03:41.181415 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.181420 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.181425 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.181430 | orchestrator |
2025-07-12 20:03:41.181435 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-07-12 20:03:41.181439 | orchestrator | Saturday 12 July 2025 20:03:10 +0000 (0:00:04.876) 0:05:45.898 *********
2025-07-12 20:03:41.181444 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.181449 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.181457 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.181465 | orchestrator |
2025-07-12 20:03:41.181473 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-07-12 20:03:41.181481 | orchestrator | Saturday 12 July 2025 20:03:14 +0000 (0:00:03.735) 0:05:49.634 *********
2025-07-12 20:03:41.181495 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.181503 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.181512 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.181520 | orchestrator |
2025-07-12 20:03:41.181528 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-07-12 20:03:41.181536 | orchestrator | Saturday 12 July 2025 20:03:22 +0000 (0:00:08.268) 0:05:57.902 *********
2025-07-12 20:03:41.181544 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.181552 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.181560 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.181568 | orchestrator |
2025-07-12 20:03:41.181577 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-07-12 20:03:41.181584 | orchestrator | Saturday 12 July 2025 20:03:26 +0000 (0:00:03.805) 0:06:01.707 *********
2025-07-12 20:03:41.181593 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:03:41.181598 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:03:41.181603 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:03:41.181607 | orchestrator |
2025-07-12 20:03:41.181612 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-07-12 20:03:41.181617 | orchestrator | Saturday 12 July 2025 20:03:31 +0000 (0:00:04.528) 0:06:06.235 *********
2025-07-12 20:03:41.181622 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181627 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181632 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181637 | orchestrator |
2025-07-12 20:03:41.181642 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-07-12 20:03:41.181647 | orchestrator | Saturday 12 July 2025 20:03:31 +0000 (0:00:00.363) 0:06:06.599 *********
2025-07-12 20:03:41.181652 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181656 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181661 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181668 | orchestrator |
2025-07-12 20:03:41.181676 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-07-12 20:03:41.181685 | orchestrator | Saturday 12 July 2025 20:03:32 +0000 (0:00:00.799) 0:06:07.398 *********
2025-07-12 20:03:41.181693 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181702 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181710 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181718 | orchestrator |
2025-07-12 20:03:41.181727 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-07-12 20:03:41.181735 | orchestrator | Saturday 12 July 2025 20:03:32 +0000 (0:00:00.351) 0:06:07.750 *********
2025-07-12 20:03:41.181743 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181751 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181759 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181767 | orchestrator |
2025-07-12 20:03:41.181776 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-07-12 20:03:41.181784 | orchestrator | Saturday 12 July 2025 20:03:33 +0000 (0:00:00.349) 0:06:08.099 *********
2025-07-12 20:03:41.181794 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181801 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181810 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181815 | orchestrator |
2025-07-12 20:03:41.181820 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-07-12 20:03:41.181825 | orchestrator | Saturday 12 July 2025 20:03:33 +0000 (0:00:00.325) 0:06:08.425 *********
2025-07-12 20:03:41.181830 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:03:41.181834 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:03:41.181839 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:03:41.181844 | orchestrator |
2025-07-12 20:03:41.181849 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-07-12 20:03:41.181858 | orchestrator | Saturday 12 July 2025 20:03:34 +0000 (0:00:00.730) 0:06:09.155 *********
2025-07-12 20:03:41.181863 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.181873 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.181878 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.181883 | orchestrator |
2025-07-12 20:03:41.181888 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-07-12 20:03:41.181893 | orchestrator | Saturday 12 July 2025 20:03:38 +0000 (0:00:04.851) 0:06:14.007 *********
2025-07-12 20:03:41.181897 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:03:41.181903 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:03:41.181907 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:03:41.181912 | orchestrator |
2025-07-12 20:03:41.181917 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:03:41.181922 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-12 20:03:41.181932 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-12 20:03:41.181937 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-12 20:03:41.181942 | orchestrator |
2025-07-12 20:03:41.181962 | orchestrator |
2025-07-12 20:03:41.181971 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:03:41.181979 | orchestrator | Saturday 12 July 2025 20:03:39 +0000 (0:00:00.820) 0:06:14.828 *********
2025-07-12 20:03:41.181987 | orchestrator | ===============================================================================
2025-07-12 20:03:41.181995 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.27s
2025-07-12 20:03:41.182003 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 7.27s
2025-07-12 20:03:41.182009 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.16s
2025-07-12 20:03:41.182047 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.42s
2025-07-12 20:03:41.182053 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.24s
2025-07-12 20:03:41.182059 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.97s
2025-07-12 20:03:41.182064 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.88s
2025-07-12 20:03:41.182069 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.85s
2025-07-12 20:03:41.182074 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.81s
2025-07-12 20:03:41.182079 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.76s
2025-07-12 20:03:41.182084 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.53s
2025-07-12 20:03:41.182089 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.44s
2025-07-12 20:03:41.182094 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.31s
2025-07-12 20:03:41.182099 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.16s
2025-07-12 20:03:41.182104 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.11s
2025-07-12 20:03:41.182109 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.04s
2025-07-12 20:03:41.182113 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.90s
2025-07-12 20:03:41.182118 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.81s
2025-07-12 20:03:41.182123 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.80s
2025-07-12 20:03:41.182128 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.79s
2025-07-12 20:03:41.182136 | orchestrator | 2025-07-12 20:03:41 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:03:44.220277 | orchestrator | 2025-07-12 20:03:44 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state STARTED
2025-07-12 20:03:44.220492 | orchestrator | 2025-07-12 20:03:44 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED
2025-07-12 20:03:44.221185 | orchestrator | 2025-07-12 20:03:44 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED
2025-07-12 20:03:44.221381 | orchestrator | 2025-07-12 20:03:44 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:05:43.163868 | orchestrator |
2025-07-12 20:05:43.164312 | orchestrator |
2025-07-12 20:05:43.164333 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-07-12 20:05:43.164346 | orchestrator |
2025-07-12 20:05:43.164357 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-07-12 20:05:43.164370 | orchestrator | Saturday 12 July 2025 19:54:48 +0000 (0:00:00.715) 0:00:00.715 *********
2025-07-12 20:05:43.164383 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.164396 | orchestrator | 2025-07-12 20:05:43.164407 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-12 20:05:43.164418 | orchestrator | Saturday 12 July 2025 19:54:49 +0000 (0:00:01.096) 0:00:01.812 ********* 2025-07-12 20:05:43.164430 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.164442 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.164453 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.164464 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.164475 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.164486 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.164497 | orchestrator | 2025-07-12 20:05:43.164508 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-07-12 20:05:43.164519 | orchestrator | Saturday 12 July 2025 19:54:50 +0000 (0:00:01.435) 0:00:03.248 ********* 2025-07-12 20:05:43.165151 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.165176 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.165188 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.165200 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.165211 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.165223 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.165235 | orchestrator | 2025-07-12 20:05:43.165247 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-07-12 20:05:43.165258 | orchestrator | Saturday 12 July 2025 19:54:51 +0000 (0:00:00.753) 0:00:04.001 ********* 2025-07-12 20:05:43.165270 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.165281 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.165293 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.165304 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.165316 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.165765 | 
orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.165785 | orchestrator | 2025-07-12 20:05:43.165797 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-07-12 20:05:43.165809 | orchestrator | Saturday 12 July 2025 19:54:52 +0000 (0:00:01.029) 0:00:05.030 ********* 2025-07-12 20:05:43.165820 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.165832 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.165843 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.165855 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.166534 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.166553 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.166565 | orchestrator | 2025-07-12 20:05:43.166577 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-07-12 20:05:43.166588 | orchestrator | Saturday 12 July 2025 19:54:53 +0000 (0:00:00.651) 0:00:05.681 ********* 2025-07-12 20:05:43.166599 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.166610 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.166620 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.166631 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.166642 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.166653 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.166664 | orchestrator | 2025-07-12 20:05:43.166675 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-07-12 20:05:43.166686 | orchestrator | Saturday 12 July 2025 19:54:53 +0000 (0:00:00.633) 0:00:06.314 ********* 2025-07-12 20:05:43.166697 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.166707 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.166718 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.166729 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.166740 | orchestrator | ok: [testbed-node-4] 
2025-07-12 20:05:43.166751 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.166762 | orchestrator | 2025-07-12 20:05:43.166773 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-07-12 20:05:43.166785 | orchestrator | Saturday 12 July 2025 19:54:54 +0000 (0:00:00.729) 0:00:07.044 ********* 2025-07-12 20:05:43.166796 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.166808 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.166819 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.166830 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.166841 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.166852 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.167059 | orchestrator | 2025-07-12 20:05:43.167076 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-07-12 20:05:43.167087 | orchestrator | Saturday 12 July 2025 19:54:55 +0000 (0:00:00.670) 0:00:07.715 ********* 2025-07-12 20:05:43.167099 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.167110 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.167123 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.167136 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.167148 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.167160 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.167173 | orchestrator | 2025-07-12 20:05:43.167185 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-07-12 20:05:43.167198 | orchestrator | Saturday 12 July 2025 19:54:56 +0000 (0:00:00.927) 0:00:08.642 ********* 2025-07-12 20:05:43.167211 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 20:05:43.167224 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 20:05:43.167238 | orchestrator | 
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:05:43.167250 | orchestrator | 2025-07-12 20:05:43.167263 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-07-12 20:05:43.167324 | orchestrator | Saturday 12 July 2025 19:54:56 +0000 (0:00:00.695) 0:00:09.337 ********* 2025-07-12 20:05:43.167354 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.167382 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.167394 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.167407 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.167419 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.167432 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.167444 | orchestrator | 2025-07-12 20:05:43.167477 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-07-12 20:05:43.167488 | orchestrator | Saturday 12 July 2025 19:54:57 +0000 (0:00:01.004) 0:00:10.342 ********* 2025-07-12 20:05:43.167499 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 20:05:43.167511 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 20:05:43.167521 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:05:43.167532 | orchestrator | 2025-07-12 20:05:43.167543 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-07-12 20:05:43.167553 | orchestrator | Saturday 12 July 2025 19:55:00 +0000 (0:00:02.891) 0:00:13.233 ********* 2025-07-12 20:05:43.167563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 20:05:43.167573 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 20:05:43.167582 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 20:05:43.167592 | orchestrator | 
skipping: [testbed-node-0] 2025-07-12 20:05:43.167602 | orchestrator | 2025-07-12 20:05:43.167612 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-07-12 20:05:43.167621 | orchestrator | Saturday 12 July 2025 19:55:01 +0000 (0:00:00.845) 0:00:14.079 ********* 2025-07-12 20:05:43.167633 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.167646 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.167656 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.167666 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.167676 | orchestrator | 2025-07-12 20:05:43.167685 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-07-12 20:05:43.167695 | orchestrator | Saturday 12 July 2025 19:55:02 +0000 (0:00:01.301) 0:00:15.380 ********* 2025-07-12 20:05:43.167707 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.167721 | orchestrator | 
skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.167731 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.167748 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.167758 | orchestrator | 2025-07-12 20:05:43.167768 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-07-12 20:05:43.167777 | orchestrator | Saturday 12 July 2025 19:55:03 +0000 (0:00:00.286) 0:00:15.667 ********* 2025-07-12 20:05:43.167794 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 19:54:58.600613', 'end': '2025-07-12 19:54:58.849336', 'delta': '0:00:00.248723', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  
2025-07-12 20:05:43.167818 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 19:54:59.562939', 'end': '2025-07-12 19:54:59.823044', 'delta': '0:00:00.260105', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.167829 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 19:55:00.314844', 'end': '2025-07-12 19:55:00.604991', 'delta': '0:00:00.290147', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.167840 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.167901 | orchestrator | 2025-07-12 20:05:43.167913 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-07-12 20:05:43.167923 | orchestrator | Saturday 12 July 2025 19:55:03 +0000 (0:00:00.208) 0:00:15.875 ********* 2025-07-12 20:05:43.167933 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.167943 | 
orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.167953 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.167963 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.168045 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.168057 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.168067 | orchestrator |
2025-07-12 20:05:43.168077 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-12 20:05:43.168087 | orchestrator | Saturday 12 July 2025 19:55:04 +0000 (0:00:01.319) 0:00:17.195 *********
2025-07-12 20:05:43.168097 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.168107 | orchestrator |
2025-07-12 20:05:43.168116 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-12 20:05:43.168126 | orchestrator | Saturday 12 July 2025 19:55:05 +0000 (0:00:00.806) 0:00:18.002 *********
2025-07-12 20:05:43.168136 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168146 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.168164 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.168172 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.168180 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.168188 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.168196 | orchestrator |
2025-07-12 20:05:43.168204 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-12 20:05:43.168212 | orchestrator | Saturday 12 July 2025 19:55:06 +0000 (0:00:01.403) 0:00:19.405 *********
2025-07-12 20:05:43.168220 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168228 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.168236 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.168244 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.168252 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.168260 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.168268 | orchestrator |
2025-07-12 20:05:43.168276 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 20:05:43.168284 | orchestrator | Saturday 12 July 2025 19:55:08 +0000 (0:00:01.245) 0:00:20.651 *********
2025-07-12 20:05:43.168292 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168300 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.168307 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.168315 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.168323 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.168331 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.168339 | orchestrator |
2025-07-12 20:05:43.168347 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-12 20:05:43.168355 | orchestrator | Saturday 12 July 2025 19:55:09 +0000 (0:00:00.883) 0:00:21.534 *********
2025-07-12 20:05:43.168363 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168371 | orchestrator |
2025-07-12 20:05:43.168379 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-12 20:05:43.168387 | orchestrator | Saturday 12 July 2025 19:55:09 +0000 (0:00:00.097) 0:00:21.632 *********
2025-07-12 20:05:43.168395 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168403 | orchestrator |
2025-07-12 20:05:43.168411 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 20:05:43.168419 | orchestrator | Saturday 12 July 2025 19:55:09 +0000 (0:00:00.206) 0:00:21.838 *********
2025-07-12 20:05:43.168427 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168435 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.168443 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.168456 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.168464 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.168472 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.168480 | orchestrator |
2025-07-12 20:05:43.168488 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-12 20:05:43.168501 | orchestrator | Saturday 12 July 2025 19:55:10 +0000 (0:00:00.909) 0:00:22.747 *********
2025-07-12 20:05:43.168510 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168518 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.168525 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.168533 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.168541 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.168549 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.168557 | orchestrator |
2025-07-12 20:05:43.168565 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-12 20:05:43.168573 | orchestrator | Saturday 12 July 2025 19:55:11 +0000 (0:00:01.094) 0:00:23.841 *********
2025-07-12 20:05:43.168580 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168588 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.168596 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.168604 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.168612 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.168625 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.168633 | orchestrator |
2025-07-12 20:05:43.168641 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-12 20:05:43.168649 | orchestrator | Saturday 12 July 2025 19:55:12 +0000 (0:00:00.852) 0:00:24.696 *********
2025-07-12 20:05:43.168657 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168664 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.168672 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.168680 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.168688 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.168696 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.168703 | orchestrator |
2025-07-12 20:05:43.168711 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-12 20:05:43.168719 | orchestrator | Saturday 12 July 2025 19:55:13 +0000 (0:00:00.902) 0:00:25.599 *********
2025-07-12 20:05:43.168727 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168735 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.168743 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.168751 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.168758 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.168766 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.168774 | orchestrator |
2025-07-12 20:05:43.168782 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-07-12 20:05:43.168790 | orchestrator | Saturday 12 July 2025 19:55:13 +0000 (0:00:00.692) 0:00:26.291 *********
2025-07-12 20:05:43.168798 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.168806 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.168813 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.168821 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.168829 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.168837 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.168845 | orchestrator |
2025-07-12 20:05:43.168852 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-07-12 20:05:43.168860 | orchestrator |
Saturday 12 July 2025 19:55:14 +0000 (0:00:00.698) 0:00:26.990 ********* 2025-07-12 20:05:43.168868 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.168876 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.168884 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.168892 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.168900 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.168907 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.168915 | orchestrator | 2025-07-12 20:05:43.168923 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-12 20:05:43.168931 | orchestrator | Saturday 12 July 2025 19:55:15 +0000 (0:00:00.612) 0:00:27.603 ********* 2025-07-12 20:05:43.168940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.168948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.168957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.168991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-07-12 20:05:43.169031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41', 'scsi-SQEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41-part1', 'scsi-SQEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41-part14', 'scsi-SQEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41-part15', 'scsi-SQEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41-part16', 'scsi-SQEMU_QEMU_HARDDISK_82ec485a-b082-4a6f-b189-87a9a1d03f41-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169076 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169112 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169128 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.169136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b', 'scsi-SQEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b-part1', 'scsi-SQEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b-part14', 'scsi-SQEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b-part15', 'scsi-SQEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b-part16', 'scsi-SQEMU_QEMU_HARDDISK_e4d3d755-d51e-4e4d-b58b-320c6a01a06b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 
20:05:43.169187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169229 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169275 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.169284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e', 'scsi-SQEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e-part1', 'scsi-SQEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e-part14', 'scsi-SQEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e-part15', 'scsi-SQEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e-part16', 
'scsi-SQEMU_QEMU_HARDDISK_f7f6b2fc-7ea1-4b13-a66f-9ce072e2923e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 20:05:43.169307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 20:05:43 | INFO  | Task fd9b9eef-ea56-4616-aeb8-d81d8894d46a is in state SUCCESS
2025-07-12 20:05:43.169325 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d5945923--5bd4--5f45--a4a9--07ddacb4606e-osd--block--d5945923--5bd4--5f45--a4a9--07ddacb4606e', 'dm-uuid-LVM-E4eL0LCKh1BPKY8m2SRlztTYqYZwNxGHdCPWgbJnJgpsuF01ckXDgnYtveU2JBvH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-12 20:05:43.169334 | orchestrator | skipping: [testbed-node-3] =>
(item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--661525d0--45b6--5e60--bde8--1fec1e4af76b-osd--block--661525d0--45b6--5e60--bde8--1fec1e4af76b', 'dm-uuid-LVM-MhXrNIYhW041vv8F14dWtTjGOwNwuQZklnSVX9pu8rZxwvpajueBUzVQ08pTYxHG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169372 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 20:05:43.169381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169431 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part1', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part14', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part15', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part16', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d5945923--5bd4--5f45--a4a9--07ddacb4606e-osd--block--d5945923--5bd4--5f45--a4a9--07ddacb4606e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aM5aYs-O17P-23z5-vw4u-RED1-bHgy-2Qq0cS', 'scsi-0QEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9', 'scsi-SQEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--661525d0--45b6--5e60--bde8--1fec1e4af76b-osd--block--661525d0--45b6--5e60--bde8--1fec1e4af76b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VAW22O-gcuB-Pls3-j1kL-HI2W-ihHd-pGfR7E', 'scsi-0QEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94', 'scsi-SQEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418', 'scsi-SQEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169485 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169493 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--aa90e2bf--e75d--5c47--ae76--8a1384e00d58-osd--block--aa90e2bf--e75d--5c47--ae76--8a1384e00d58', 'dm-uuid-LVM-lyZgmPFNbStq4ZjJ5YzNYvvGdw7sbdHU6rfBnK9q8FkkqwaYN2SALLc0g8VAlILf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f895b30--8de9--512a--b128--a5c9585d4791-osd--block--2f895b30--8de9--512a--b128--a5c9585d4791', 'dm-uuid-LVM-MEOVNephN7hzmyal4PNe2WbCkByuS3py5A19FOo2P8GuaxCIo2W6IWNF7okT5PDR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169560 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169570 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169578 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.169586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169594 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169602 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a-osd--block--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a', 'dm-uuid-LVM-TdYNvufdYHm7xfhdXH7cFx9dQQGYc1tDnH9PGQBkBNzkl3uLDheiVs9v9EI4xx3K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169647 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71032f38--677b--542f--825f--c43a6d71b028-osd--block--71032f38--677b--542f--825f--c43a6d71b028', 'dm-uuid-LVM-O1VDBnk7la3dA9fvBRCK8INxI7gUKwapmWzNjIh5Dt5coqHLZWucSQFbeq1udFyd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--aa90e2bf--e75d--5c47--ae76--8a1384e00d58-osd--block--aa90e2bf--e75d--5c47--ae76--8a1384e00d58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xbaqud-QFcO-hkZ1-R2n7-smvj-mLc2-DdeLaP', 'scsi-0QEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7', 'scsi-SQEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169679 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--2f895b30--8de9--512a--b128--a5c9585d4791-osd--block--2f895b30--8de9--512a--b128--a5c9585d4791'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V9AqmV-XoN7-NG0Q-oNME-OAER-Ejob-7U7cUx', 'scsi-0QEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350', 'scsi-SQEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb', 'scsi-SQEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169745 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.169754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:05:43.169797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part1', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part14', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part15', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part16', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169812 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a-osd--block--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d2D0Ta-V3e8-9KEz-3AwB-4e3O-dsh5-WwWg4F', 'scsi-0QEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8', 'scsi-SQEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--71032f38--677b--542f--825f--c43a6d71b028-osd--block--71032f38--677b--542f--825f--c43a6d71b028'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-E59y2T-5LhQ-PhO1-6zgU-tgBF-5nbX-Or2zhA', 'scsi-0QEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28', 'scsi-SQEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169829 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914', 'scsi-SQEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:05:43.169855 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.169863 | orchestrator | 2025-07-12 20:05:43.169872 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-12 20:05:43.169880 | orchestrator | Saturday 12 July 2025 19:55:16 +0000 (0:00:01.493) 0:00:29.096 ********* 2025-07-12 20:05:43.169888 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:05:43.169897 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:05:43.169912 | orchestrator | skipping: [testbed-node-0] => (item=loop2; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.169921 | orchestrator | skipping: [testbed-node-0] => (item=loop3; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.169929 | orchestrator | skipping: [testbed-node-0] => (item=loop4; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.169941 | orchestrator | skipping: [testbed-node-1] => (item=loop0; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170531 | orchestrator | skipping: [testbed-node-1] => (item=loop1; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170554 | orchestrator | skipping: [testbed-node-1] => (item=loop2; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170572 | orchestrator | skipping: [testbed-node-0] => (item=loop5; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170580 | orchestrator | skipping: [testbed-node-1] => (item=loop3; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170589 | orchestrator | skipping: [testbed-node-1] => (item=loop4; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170597 | orchestrator | skipping: [testbed-node-0] => (item=loop6; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170663 | orchestrator | skipping: [testbed-node-1] => (item=loop5; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170676 | orchestrator | skipping: [testbed-node-1] => (item=loop6; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170690 | orchestrator | skipping: [testbed-node-0] => (item=loop7; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170699 | orchestrator | skipping: [testbed-node-1] => (item=loop7; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170770 | orchestrator | skipping: [testbed-node-0] => (item=sda [QEMU HARDDISK, 80.00 GB, Virtio SCSI]; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170790 | orchestrator | skipping: [testbed-node-0] => (item=sr0 [QEMU DVD-ROM, 506.00 KB]; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170833 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.170849 | orchestrator | skipping: [testbed-node-1] => (item=sda [QEMU HARDDISK, 80.00 GB, Virtio SCSI]; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170868 | orchestrator | skipping: [testbed-node-1] => (item=sr0 [QEMU DVD-ROM, 506.00 KB]; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.170963 | orchestrator | skipping: [testbed-node-2] => (item=loop0; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171039 | orchestrator | skipping: [testbed-node-2] => (item=loop1; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171048 | orchestrator | skipping: [testbed-node-2] => (item=loop2; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171057 | orchestrator | skipping: [testbed-node-2] => (item=loop3; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171082 | orchestrator | skipping: [testbed-node-2] => (item=loop4; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171091 | orchestrator | skipping: [testbed-node-2] => (item=loop5; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171162 | orchestrator | skipping: [testbed-node-2] => (item=loop6; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171175 | orchestrator | skipping: [testbed-node-2] => (item=loop7; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171191 | orchestrator | skipping: [testbed-node-2] => (item=sda [QEMU HARDDISK, 80.00 GB, Virtio SCSI]; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171208 | orchestrator | skipping: [testbed-node-2] => (item=sr0 [QEMU DVD-ROM, 506.00 KB]; Conditional result was False: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-07-12 20:05:43.171217 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.171275 | orchestrator | skipping: [testbed-node-3] => (item=dm-0 [ceph osd-block LV, 20.00 GB]; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171294 | orchestrator | skipping: [testbed-node-3] => (item=dm-1 [ceph osd-block LV, 20.00 GB]; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171303 | orchestrator | skipping: [testbed-node-3] => (item=loop0; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171311 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.171319 | orchestrator | skipping: [testbed-node-3] => (item=loop1; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171328 | orchestrator | skipping: [testbed-node-3] => (item=loop2; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171340 | orchestrator | skipping: [testbed-node-3] => (item=loop3; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171397 | orchestrator | skipping: [testbed-node-3] => (item=loop4; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171415 | orchestrator | skipping: [testbed-node-4] => (item=dm-0 [ceph osd-block LV, 20.00 GB]; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171423 | orchestrator | skipping: [testbed-node-3] => (item=loop5; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171432 | orchestrator | skipping: [testbed-node-4] => (item=dm-1 [ceph osd-block LV, 20.00 GB]; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171440 | orchestrator | skipping: [testbed-node-3] => (item=loop6; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171448 | orchestrator | skipping: [testbed-node-4] => (item=loop0; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171518 | orchestrator | skipping: [testbed-node-3] => (item=loop7; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171534 | orchestrator | skipping: [testbed-node-4] => (item=loop1; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171542 | orchestrator | skipping: [testbed-node-4] => (item=loop2; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171549 | orchestrator | skipping: [testbed-node-3] => (item=sda [QEMU HARDDISK, 80.00 GB, Virtio SCSI]; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171607 | orchestrator | skipping: [testbed-node-4] => (item=loop3; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171617 | orchestrator | skipping: [testbed-node-4] => (item=loop4; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171625 | orchestrator | skipping: [testbed-node-3] => (item=sdb [QEMU HARDDISK, 20.00 GB, ceph LVM PV, holder dm-0]; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171633 | orchestrator | skipping: [testbed-node-4] => (item=loop5; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171640 | orchestrator | skipping: [testbed-node-4] => (item=loop6; Conditional result was False: 'osd_auto_discovery | default(False) | bool')
2025-07-12 20:05:43.171647 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional
result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--661525d0--45b6--5e60--bde8--1fec1e4af76b-osd--block--661525d0--45b6--5e60--bde8--1fec1e4af76b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VAW22O-gcuB-Pls3-j1kL-HI2W-ihHd-pGfR7E', 'scsi-0QEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94', 'scsi-SQEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 20:05:43.171723 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418', 'scsi-SQEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171789 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a-osd--block--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a', 'dm-uuid-LVM-TdYNvufdYHm7xfhdXH7cFx9dQQGYc1tDnH9PGQBkBNzkl3uLDheiVs9v9EI4xx3K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171801 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--aa90e2bf--e75d--5c47--ae76--8a1384e00d58-osd--block--aa90e2bf--e75d--5c47--ae76--8a1384e00d58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xbaqud-QFcO-hkZ1-R2n7-smvj-mLc2-DdeLaP', 'scsi-0QEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7', 'scsi-SQEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171808 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71032f38--677b--542f--825f--c43a6d71b028-osd--block--71032f38--677b--542f--825f--c43a6d71b028', 'dm-uuid-LVM-O1VDBnk7la3dA9fvBRCK8INxI7gUKwapmWzNjIh5Dt5coqHLZWucSQFbeq1udFyd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171815 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171822 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2f895b30--8de9--512a--b128--a5c9585d4791-osd--block--2f895b30--8de9--512a--b128--a5c9585d4791'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V9AqmV-XoN7-NG0Q-oNME-OAER-Ejob-7U7cUx', 'scsi-0QEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350', 'scsi-SQEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171879 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171889 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171896 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb', 'scsi-SQEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': 
'2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171911 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171918 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.171989 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.172000 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.172008 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.172054 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.172064 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.172133 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part1', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part14', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part15', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part16', 
'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.172151 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a-osd--block--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d2D0Ta-V3e8-9KEz-3AwB-4e3O-dsh5-WwWg4F', 'scsi-0QEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8', 'scsi-SQEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.172158 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.172166 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': 
['ceph--71032f38--677b--542f--825f--c43a6d71b028-osd--block--71032f38--677b--542f--825f--c43a6d71b028'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-E59y2T-5LhQ-PhO1-6zgU-tgBF-5nbX-Or2zhA', 'scsi-0QEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28', 'scsi-SQEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.172173 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914', 'scsi-SQEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.172188 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:05:43.172196 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.172203 | orchestrator | 2025-07-12 20:05:43.172210 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-07-12 20:05:43.172218 | orchestrator | Saturday 12 July 2025 19:55:18 +0000 (0:00:01.731) 0:00:30.828 ********* 2025-07-12 20:05:43.172225 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.172233 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.172240 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.172290 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.172299 | orchestrator | ok: [testbed-node-4] 2025-07-12 
20:05:43.172307 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.172314 | orchestrator | 2025-07-12 20:05:43.172321 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-07-12 20:05:43.172328 | orchestrator | Saturday 12 July 2025 19:55:19 +0000 (0:00:01.162) 0:00:31.990 ********* 2025-07-12 20:05:43.172335 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.172352 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.172359 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.172366 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.172372 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.172379 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.172386 | orchestrator | 2025-07-12 20:05:43.172392 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-12 20:05:43.172399 | orchestrator | Saturday 12 July 2025 19:55:20 +0000 (0:00:00.734) 0:00:32.724 ********* 2025-07-12 20:05:43.172406 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.172413 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.172420 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.172426 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.172433 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.172440 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.172492 | orchestrator | 2025-07-12 20:05:43.172501 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-12 20:05:43.172508 | orchestrator | Saturday 12 July 2025 19:55:21 +0000 (0:00:01.136) 0:00:33.861 ********* 2025-07-12 20:05:43.172515 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.172522 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.172529 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.172536 | orchestrator | skipping: 
[testbed-node-3]
2025-07-12 20:05:43.172543 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.172549 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.172556 | orchestrator |
2025-07-12 20:05:43.172563 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 20:05:43.172600 | orchestrator | Saturday 12 July 2025 19:55:21 +0000 (0:00:00.622) 0:00:34.484 *********
2025-07-12 20:05:43.172608 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.172615 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.172621 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.172634 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.172641 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.172648 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.172654 | orchestrator |
2025-07-12 20:05:43.172661 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 20:05:43.172668 | orchestrator | Saturday 12 July 2025 19:55:22 +0000 (0:00:00.868) 0:00:35.352 *********
2025-07-12 20:05:43.172675 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.172681 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.172688 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.172695 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.172702 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.172708 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.172715 | orchestrator |
2025-07-12 20:05:43.172726 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-12 20:05:43.172737 | orchestrator | Saturday 12 July 2025 19:55:23 +0000 (0:00:00.632) 0:00:35.985 *********
2025-07-12 20:05:43.172748 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:05:43.172760 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-07-12 20:05:43.172771 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-07-12 20:05:43.172781 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:05:43.172792 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-07-12 20:05:43.172802 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 20:05:43.172813 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 20:05:43.172825 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-07-12 20:05:43.172835 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:05:43.172846 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 20:05:43.172857 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 20:05:43.172868 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 20:05:43.172879 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-07-12 20:05:43.172890 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-07-12 20:05:43.172902 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 20:05:43.172909 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 20:05:43.172915 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 20:05:43.172922 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 20:05:43.172929 | orchestrator |
2025-07-12 20:05:43.172935 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-07-12 20:05:43.172942 | orchestrator | Saturday 12 July 2025 19:55:25 +0000 (0:00:02.260) 0:00:38.245 *********
2025-07-12 20:05:43.172949 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:05:43.172956 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:05:43.172962 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:05:43.172986 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.172993 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-07-12 20:05:43.173000 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-07-12 20:05:43.173012 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-07-12 20:05:43.173019 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.173025 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-07-12 20:05:43.173032 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-07-12 20:05:43.173039 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-07-12 20:05:43.173045 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.173081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 20:05:43.173089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 20:05:43.173102 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 20:05:43.173109 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.173116 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 20:05:43.173122 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 20:05:43.173129 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 20:05:43.173136 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.173144 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 20:05:43.173152 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 20:05:43.173159 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 20:05:43.173167 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.173174 | orchestrator |
2025-07-12 20:05:43.173182 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-07-12 20:05:43.173190 | orchestrator | Saturday 12 July 2025 19:55:26 +0000 (0:00:00.908) 0:00:39.153 *********
2025-07-12 20:05:43.173197 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.173205 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.173212 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.173220 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.173228 | orchestrator |
2025-07-12 20:05:43.173236 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-12 20:05:43.173244 | orchestrator | Saturday 12 July 2025 19:55:27 +0000 (0:00:01.274) 0:00:40.427 *********
2025-07-12 20:05:43.173252 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.173259 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.173267 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.173274 | orchestrator |
2025-07-12 20:05:43.173282 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-12 20:05:43.173289 | orchestrator | Saturday 12 July 2025 19:55:28 +0000 (0:00:00.373) 0:00:40.801 *********
2025-07-12 20:05:43.173297 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.173304 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.173312 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.173320 | orchestrator |
2025-07-12 20:05:43.173327 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-12 20:05:43.173335 | orchestrator | Saturday 12 July 2025 19:55:28 +0000 (0:00:00.607) 0:00:41.409 *********
2025-07-12 20:05:43.173343 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.173351 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.173358 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.173365 | orchestrator |
2025-07-12 20:05:43.173373 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-12 20:05:43.173381 | orchestrator | Saturday 12 July 2025 19:55:29 +0000 (0:00:00.422) 0:00:41.831 *********
2025-07-12 20:05:43.173388 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.173396 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.173403 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.173411 | orchestrator |
2025-07-12 20:05:43.173419 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-12 20:05:43.173426 | orchestrator | Saturday 12 July 2025 19:55:29 +0000 (0:00:00.452) 0:00:42.284 *********
2025-07-12 20:05:43.173434 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:05:43.173442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:05:43.173450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:05:43.173457 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.173465 | orchestrator |
2025-07-12 20:05:43.173472 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-12 20:05:43.173485 | orchestrator | Saturday 12 July 2025 19:55:30 +0000 (0:00:00.323) 0:00:42.607 *********
2025-07-12 20:05:43.173493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:05:43.173500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:05:43.173508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:05:43.173515 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.173523 | orchestrator |
2025-07-12 20:05:43.173530 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-12 20:05:43.173536 | orchestrator | Saturday 12 July 2025 19:55:30 +0000 (0:00:00.340) 0:00:42.947 *********
2025-07-12 20:05:43.173543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:05:43.173549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:05:43.173556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:05:43.173563 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.173569 | orchestrator |
2025-07-12 20:05:43.173576 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-12 20:05:43.173583 | orchestrator | Saturday 12 July 2025 19:55:31 +0000 (0:00:00.569) 0:00:43.517 *********
2025-07-12 20:05:43.173589 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.173596 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.173603 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.173609 | orchestrator |
2025-07-12 20:05:43.173616 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-12 20:05:43.173627 | orchestrator | Saturday 12 July 2025 19:55:31 +0000 (0:00:00.706) 0:00:44.223 *********
2025-07-12 20:05:43.173634 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-12 20:05:43.173641 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-12 20:05:43.173648 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-12 20:05:43.173654 | orchestrator |
2025-07-12 20:05:43.173661 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-07-12 20:05:43.173668 | orchestrator | Saturday 12 July 2025 19:55:32 +0000 (0:00:00.663) 0:00:44.887 *********
2025-07-12 20:05:43.173692 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:05:43.173700 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:05:43.173707 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:05:43.173714 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-07-12 20:05:43.173720 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-12 20:05:43.173727 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-12 20:05:43.173734 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-12 20:05:43.173741 | orchestrator |
2025-07-12 20:05:43.173747 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-07-12 20:05:43.173754 | orchestrator | Saturday 12 July 2025 19:55:33 +0000 (0:00:00.737) 0:00:45.624 *********
2025-07-12 20:05:43.173761 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:05:43.173767 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:05:43.173774 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:05:43.173781 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-07-12 20:05:43.173788 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-12 20:05:43.173794 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-12 20:05:43.173801 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-12 20:05:43.173807 | orchestrator |
2025-07-12 20:05:43.173814 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 20:05:43.173825 | orchestrator | Saturday 12 July 2025 19:55:35 +0000 (0:00:02.570) 0:00:48.195 *********
2025-07-12 20:05:43.173832 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.173840 | orchestrator |
2025-07-12 20:05:43.173847 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 20:05:43.173854 | orchestrator | Saturday 12 July 2025 19:55:36 +0000 (0:00:01.273) 0:00:49.468 *********
2025-07-12 20:05:43.173860 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.173867 | orchestrator |
2025-07-12 20:05:43.173874 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 20:05:43.173880 | orchestrator | Saturday 12 July 2025 19:55:38 +0000 (0:00:01.134) 0:00:50.602 *********
2025-07-12 20:05:43.173887 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.173893 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.173900 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.173906 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.173913 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.173920 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.173926 | orchestrator |
2025-07-12 20:05:43.173933 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 20:05:43.173939 | orchestrator | Saturday 12 July 2025 19:55:39 +0000 (0:00:01.015) 0:00:51.618 *********
2025-07-12 20:05:43.173946 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.173953 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.173959 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174002 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.174010 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.174037 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.174046 | orchestrator |
2025-07-12 20:05:43.174053 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 20:05:43.174059 | orchestrator | Saturday 12 July 2025 19:55:40 +0000 (0:00:01.107) 0:00:52.726 *********
2025-07-12 20:05:43.174066 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174073 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174080 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174086 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.174093 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.174100 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.174107 | orchestrator |
2025-07-12 20:05:43.174113 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 20:05:43.174120 | orchestrator | Saturday 12 July 2025 19:55:41 +0000 (0:00:01.086) 0:00:54.398 *********
2025-07-12 20:05:43.174127 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174133 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174140 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174147 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.174153 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.174160 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.174167 | orchestrator |
2025-07-12 20:05:43.174173 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 20:05:43.174180 | orchestrator | Saturday 12 July 2025 19:55:42 +0000 (0:00:01.086) 0:00:55.484 *********
2025-07-12 20:05:43.174187 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.174198 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.174205 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.174211 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.174218 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.174225 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.174231 | orchestrator |
2025-07-12 20:05:43.174238 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 20:05:43.174249 | orchestrator | Saturday 12 July 2025 19:55:44 +0000 (0:00:01.393) 0:00:56.878 *********
2025-07-12 20:05:43.174277 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174285 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174292 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174298 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.174305 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.174312 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.174319 | orchestrator |
2025-07-12 20:05:43.174325 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 20:05:43.174332 | orchestrator | Saturday 12 July 2025 19:55:45 +0000 (0:00:00.977) 0:00:57.856 *********
2025-07-12 20:05:43.174339 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174345 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174352 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174359 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.174365 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.174372 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.174379 | orchestrator |
2025-07-12 20:05:43.174385 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 20:05:43.174392 | orchestrator | Saturday 12 July 2025 19:55:46 +0000 (0:00:01.287) 0:00:59.144 *********
2025-07-12 20:05:43.174399 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.174405 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.174412 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.174419 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.174425 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.174432 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.174439 | orchestrator |
2025-07-12 20:05:43.174445 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 20:05:43.174452 | orchestrator | Saturday 12 July 2025 19:55:48 +0000 (0:00:01.730) 0:01:00.875 *********
2025-07-12 20:05:43.174459 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.174465 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.174472 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.174478 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.174485 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.174492 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.174498 | orchestrator |
2025-07-12 20:05:43.174504 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 20:05:43.174510 | orchestrator | Saturday 12 July 2025 19:55:50 +0000 (0:00:01.887) 0:01:02.762 *********
2025-07-12 20:05:43.174517 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174523 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174529 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174535 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.174542 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.174548 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.174554 | orchestrator |
2025-07-12 20:05:43.174560 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 20:05:43.174566 | orchestrator | Saturday 12 July 2025 19:55:51 +0000 (0:00:01.222) 0:01:03.984 *********
2025-07-12 20:05:43.174573 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.174579 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.174585 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.174591 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.174597 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.174604 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.174610 | orchestrator |
2025-07-12 20:05:43.174616 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 20:05:43.174622 | orchestrator | Saturday 12 July 2025 19:55:52 +0000 (0:00:01.083) 0:01:05.068 *********
2025-07-12 20:05:43.174628 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174639 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174646 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174652 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.174658 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.174664 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.174671 | orchestrator |
2025-07-12 20:05:43.174677 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 20:05:43.174683 | orchestrator | Saturday 12 July 2025 19:55:53 +0000 (0:00:01.188) 0:01:06.256 *********
2025-07-12 20:05:43.174689 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174696 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174702 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174708 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.174714 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.174720 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.174726 | orchestrator |
2025-07-12 20:05:43.174733 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 20:05:43.174739 | orchestrator | Saturday 12 July 2025 19:55:54 +0000 (0:00:01.027) 0:01:07.284 *********
2025-07-12 20:05:43.174745 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174751 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174757 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174764 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.174770 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.174776 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.174782 | orchestrator |
2025-07-12 20:05:43.174788 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 20:05:43.174795 | orchestrator | Saturday 12 July 2025 19:55:55 +0000 (0:00:00.786) 0:01:08.071 *********
2025-07-12 20:05:43.174801 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174807 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174813 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174819 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.174825 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.174832 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.174838 | orchestrator |
2025-07-12 20:05:43.174844 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 20:05:43.174850 | orchestrator | Saturday 12 July 2025 19:55:56 +0000 (0:00:00.797) 0:01:08.868 *********
2025-07-12 20:05:43.174857 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.174863 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.174869 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.174876 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.174882 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.174888 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.174894 | orchestrator |
2025-07-12 20:05:43.174901 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 20:05:43.174923 | orchestrator | Saturday 12 July 2025 19:55:56 +0000 (0:00:00.566) 0:01:09.435 *********
2025-07-12 20:05:43.174930 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.174937 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.174943 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.174949 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.174955 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.174961 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.174982 | orchestrator |
2025-07-12 20:05:43.174989 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 20:05:43.174995 | orchestrator | Saturday 12 July 2025 19:55:57 +0000 (0:00:00.813) 0:01:10.249 *********
2025-07-12 20:05:43.175002 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.175008 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.175014 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.175021 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.175027 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.175033 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.175044 | orchestrator |
2025-07-12 20:05:43.175051 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 20:05:43.175057 | orchestrator | Saturday 12 July 2025 19:55:58 +0000 (0:00:00.582) 0:01:10.832 *********
2025-07-12 20:05:43.175063 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.175069 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.175076 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.175082 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.175088 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.175094 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.175100 | orchestrator |
2025-07-12 20:05:43.175107 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-07-12 20:05:43.175113 | orchestrator | Saturday 12 July 2025 19:55:59 +0000 (0:00:01.311) 0:01:12.143 *********
2025-07-12 20:05:43.175119 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:05:43.175126 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:05:43.175162 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:05:43.175170 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.175176 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.175182 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.175188 | orchestrator |
2025-07-12 20:05:43.175195 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-07-12 20:05:43.175201 | orchestrator | Saturday 12 July 2025 19:56:01 +0000 (0:00:01.868) 0:01:14.012 *********
2025-07-12 20:05:43.175208 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:05:43.175214 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:05:43.175220 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.175226 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.175232 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:05:43.175239 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.175245 | orchestrator |
2025-07-12 20:05:43.175251 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-07-12 20:05:43.175257 | orchestrator | Saturday 12 July 2025 19:56:03 +0000 (0:00:02.161) 0:01:16.173 *********
2025-07-12 20:05:43.175264 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.175270 | orchestrator |
2025-07-12 20:05:43.175276 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-07-12 20:05:43.175282 | orchestrator | Saturday 12 July 2025 19:56:05 +0000 (0:00:01.353) 0:01:17.527 *********
2025-07-12 20:05:43.175289 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.175295 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.175301 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.175308 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.175314 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.175320 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.175326 | orchestrator |
2025-07-12 20:05:43.175332 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-07-12 20:05:43.175339 | orchestrator | Saturday 12 July 2025 19:56:05 +0000 (0:00:00.899) 0:01:18.426 *********
2025-07-12 20:05:43.175345 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.175351 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.175357 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.175363 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.175370 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.175376 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.175382 | orchestrator |
2025-07-12 20:05:43.175388 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-07-12 20:05:43.175395 | orchestrator | Saturday 12 July 2025 19:56:06 +0000 (0:00:00.601) 0:01:19.028 *********
2025-07-12 20:05:43.175401 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:05:43.175407 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:05:43.175418 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:05:43.175424 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:05:43.175430 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:05:43.175436 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:05:43.175443 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:05:43.175449 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:05:43.175458 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:05:43.175464 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:05:43.175471 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:05:43.175477 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:05:43.175483 | orchestrator |
2025-07-12 20:05:43.175508 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-07-12 20:05:43.175515 | orchestrator | Saturday 12 July 2025 19:56:08 +0000 (0:00:01.669) 0:01:20.698 *********
2025-07-12 20:05:43.175521 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:05:43.175528 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.175534 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:05:43.175540 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:05:43.175546 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.175552 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.175558 | orchestrator |
2025-07-12 20:05:43.175565 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-07-12 20:05:43.175571 | orchestrator | Saturday 12 July 2025 19:56:09 +0000 (0:00:00.913) 0:01:21.611 *********
2025-07-12 20:05:43.175577 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.175583 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.175589 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.175595 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.175601 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.175607 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.175613 | orchestrator |
2025-07-12 20:05:43.175619 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-07-12 20:05:43.175626 | orchestrator | Saturday 12 July 2025 19:56:10 +0000 (0:00:00.943) 0:01:22.554 *********
2025-07-12 20:05:43.175632 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.175638 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.175644 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.175650 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.175656 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.175662 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.175668 | orchestrator |
2025-07-12 20:05:43.175674 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-07-12 20:05:43.175680 | orchestrator | Saturday 12 July 2025 19:56:10 +0000 (0:00:00.617) 0:01:23.172 *********
2025-07-12 20:05:43.175687 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.175693 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.175699 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.175705 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.175711 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.175717 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.175723 | orchestrator |
2025-07-12 20:05:43.175729 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-07-12 20:05:43.175735 | orchestrator | Saturday 12 July 2025 19:56:11 +0000 (0:00:00.804) 0:01:23.976 *********
2025-07-12 20:05:43.175747 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.175753 | orchestrator |
2025-07-12 20:05:43.175759 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-07-12 20:05:43.175766 | orchestrator | Saturday 12 July 2025 19:56:12 +0000 (0:00:01.175) 0:01:25.152 *********
2025-07-12 20:05:43.175772 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.175778 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.175784 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.175790 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.175796 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.175802 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.175808 | orchestrator |
2025-07-12 20:05:43.175814 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-07-12 20:05:43.175821 | orchestrator | Saturday 12 July 2025 19:57:17 +0000 (0:01:04.600) 0:02:29.753 *********
2025-07-12 20:05:43.175827 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:05:43.175833 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:05:43.175839 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:05:43.175845 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.175851 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:05:43.175857 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:05:43.175863 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:05:43.175870 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.175876 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:05:43.175882 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:05:43.175888 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:05:43.175894 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.175900 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:05:43.175906 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:05:43.175912 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:05:43.175918 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.175925 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:05:43.175931 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:05:43.175940 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:05:43.175946 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.175952 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:05:43.175959 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:05:43.175992 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:05:43.176018 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.176026 | orchestrator |
2025-07-12 20:05:43.176032 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-07-12 20:05:43.176038 | orchestrator | Saturday 12 July 2025 19:57:18 +0000 (0:00:00.814) 0:02:30.568 *********
2025-07-12 20:05:43.176044 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.176050 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.176056 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.176063 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.176069 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.176080 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.176087 | orchestrator |
2025-07-12 20:05:43.176093 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-07-12 20:05:43.176099 | orchestrator | Saturday 12 July 2025 19:57:18 +0000 (0:00:00.488) 0:02:31.056 *********
2025-07-12 20:05:43.176105 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.176111 | orchestrator |
2025-07-12 20:05:43.176118 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-07-12 20:05:43.176124 | orchestrator | Saturday 12 July 2025 19:57:18 +0000 (0:00:00.140) 0:02:31.196 *********
2025-07-12 20:05:43.176130 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.176136 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.176142 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.176148 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.176154 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.176160 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.176166 | orchestrator |
2025-07-12 20:05:43.176173 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-07-12 20:05:43.176179 | orchestrator | Saturday 12 July 2025 19:57:19 +0000 (0:00:00.663) 0:02:31.860 ********* 2025-07-12 20:05:43.176185 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.176191 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.176197 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.176203 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.176209 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.176215 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.176222 | orchestrator | 2025-07-12 20:05:43.176228 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-07-12 20:05:43.176234 | orchestrator | Saturday 12 July 2025 19:57:19 +0000 (0:00:00.575) 0:02:32.435 ********* 2025-07-12 20:05:43.176240 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.176246 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.176252 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.176258 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.176264 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.176270 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.176276 | orchestrator | 2025-07-12 20:05:43.176283 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-07-12 20:05:43.176289 | orchestrator | Saturday 12 July 2025 19:57:20 +0000 (0:00:00.851) 0:02:33.287 ********* 2025-07-12 20:05:43.176295 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.176301 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.176306 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.176311 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.176317 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.176322 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.176328 | orchestrator | 2025-07-12 
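The parenthesised figures in the timing lines, e.g. `(0:01:04.600)` for the Ceph image pull, are per-task durations emitted by Ansible's `profile_tasks` callback in `H:MM:SS.mmm` form. A minimal sketch for converting them to seconds when post-processing such a log (the helper name `parse_duration` is my own, not part of Zuul or ceph-ansible):

```python
# Convert an Ansible profile_tasks duration string such as "0:01:04.600"
# (hours:minutes:seconds.milliseconds) into a float number of seconds.
def parse_duration(stamp: str) -> float:
    hours, minutes, seconds = stamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# The image pull above dominates this excerpt at roughly 64.6 seconds,
# while most of the skipped tasks take well under a second each.
pull = parse_duration("0:01:04.600")
selinux = parse_duration("0:00:00.913")
```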
TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Saturday 12 July 2025 19:57:23 +0000 (0:00:02.345) 0:02:35.632 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-container-common : Include release.yml] *****************************
Saturday 12 July 2025 19:57:23 +0000 (0:00:00.847) 0:02:36.480 *********
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Saturday 12 July 2025 19:57:25 +0000 (0:00:01.022) 0:02:37.502 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Saturday 12 July 2025 19:57:25 +0000 (0:00:00.677) 0:02:38.180 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Saturday 12 July 2025 19:57:26 +0000 (0:00:00.842) 0:02:39.022 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Saturday 12 July 2025 19:57:27 +0000 (0:00:00.597) 0:02:39.619 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
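The chain of `Set_fact ceph_release …` tasks probes the major version reported by `ceph --version` and keeps only the matching codename; with the Ceph 18.x image used here, only the `reef` task reports `ok`. A minimal sketch of that version-to-codename mapping (the dict and function are my own illustration, not the role's actual code; the codenames themselves are standard Ceph releases):

```python
# Ceph major version -> release codename, mirroring the set_fact chain
# in the log (jewel through reef).
CEPH_RELEASES = {
    10: "jewel", 11: "kraken", 12: "luminous", 13: "mimic",
    14: "nautilus", 15: "octopus", 16: "pacific", 17: "quincy", 18: "reef",
}

def ceph_release(version: str) -> str:
    """Derive the codename from a version string such as '18.2.2'."""
    return CEPH_RELEASES[int(version.split(".")[0])]
```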
TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Saturday 12 July 2025 19:57:27 +0000 (0:00:00.792) 0:02:40.412 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Saturday 12 July 2025 19:57:28 +0000 (0:00:00.663) 0:02:41.075 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Saturday 12 July 2025 19:57:29 +0000 (0:00:00.822) 0:02:41.898 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Saturday 12 July 2025 19:57:30 +0000 (0:00:00.634) 0:02:42.532 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Saturday 12 July 2025 19:57:30 +0000 (0:00:00.724) 0:02:43.257 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Saturday 12 July 2025 19:57:32 +0000 (0:00:01.362) 0:02:44.619 *********
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create ceph initial directories] ***************************
Saturday 12 July 2025 19:57:33 +0000 (0:00:00.974) 0:02:45.594 *********
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-0] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-4] => (item=/var/log/ceph)
changed: [testbed-node-5] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/log/ceph)

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Saturday 12 July 2025 19:57:39 +0000 (0:00:06.678) 0:02:52.273 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create rados gateway instance directories] *****************
Saturday 12 July 2025 19:57:40 +0000 (0:00:01.019) 0:02:53.293 *********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Saturday 12 July 2025 19:57:41 +0000 (0:00:00.732) 0:02:54.026 *********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13',
'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Reset num_osds] ********************************************
Saturday 12 July 2025 19:57:43 +0000 (0:00:01.543) 0:02:55.569 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Saturday 12 July 2025 19:57:43 +0000 (0:00:00.659) 0:02:56.229 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Saturday 12 July 2025 19:57:44 +0000 (0:00:00.756) 0:02:56.986 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Saturday 12 July 2025 19:57:45 +0000 (0:00:00.582) 0:02:57.568 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact _devices] *****************************************
Saturday 12 July 2025 19:57:45 +0000 (0:00:00.689) 0:02:58.257 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Saturday 12 July 2025 19:57:46 +0000 (0:00:00.521) 0:02:58.779 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Saturday 12 July 2025 19:57:47 +0000 (0:00:00.741) 0:02:59.521 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Saturday 12 July 2025 19:57:47 +0000 (0:00:00.626) 0:03:00.147 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Saturday 12 July 2025 19:57:48 +0000 (0:00:00.778) 0:03:00.926 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Saturday 12 July 2025 19:57:51 +0000 (0:00:03.197) 0:03:04.124 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Saturday 12 July 2025 19:57:52 +0000 (0:00:00.751) 0:03:04.876 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Saturday 12 July 2025 19:57:53 +0000 (0:00:00.731) 0:03:05.607 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Render rgw configs] ****************************************
Saturday 12 July 2025 19:57:54 +0000 (0:00:01.038) 0:03:06.646 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Set config to cluster] *************************************
Saturday 12 July 2025 19:57:54 +0000 (0:00:00.687) 0:03:07.333 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4]
2025-07-12 20:05:43.178380 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-07-12 20:05:43.178385 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-07-12 20:05:43.178391 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.178397 | orchestrator | 2025-07-12 20:05:43.178402 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-07-12 20:05:43.178407 | orchestrator | Saturday 12 July 2025 19:57:55 +0000 (0:00:00.835) 0:03:08.169 ********* 2025-07-12 20:05:43.178413 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178418 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.178423 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.178429 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.178434 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.178439 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.178445 | orchestrator | 2025-07-12 20:05:43.178450 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-07-12 20:05:43.178456 | orchestrator | Saturday 12 July 2025 19:57:56 +0000 (0:00:00.779) 0:03:08.949 ********* 2025-07-12 20:05:43.178461 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178466 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.178472 | orchestrator | skipping: [testbed-node-2] 2025-07-12 
20:05:43.178477 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.178482 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.178488 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.178493 | orchestrator | 2025-07-12 20:05:43.178498 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-12 20:05:43.178504 | orchestrator | Saturday 12 July 2025 19:57:57 +0000 (0:00:00.801) 0:03:09.751 ********* 2025-07-12 20:05:43.178509 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178515 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.178520 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.178525 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.178531 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.178536 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.178541 | orchestrator | 2025-07-12 20:05:43.178547 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-12 20:05:43.178552 | orchestrator | Saturday 12 July 2025 19:57:58 +0000 (0:00:00.749) 0:03:10.500 ********* 2025-07-12 20:05:43.178557 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178563 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.178568 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.178574 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.178579 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.178587 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.178593 | orchestrator | 2025-07-12 20:05:43.178598 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-12 20:05:43.178604 | orchestrator | Saturday 12 July 2025 19:57:59 +0000 (0:00:01.158) 0:03:11.658 ********* 2025-07-12 20:05:43.178609 | 
orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178615 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.178670 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.178696 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.178703 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.178708 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.178720 | orchestrator | 2025-07-12 20:05:43.178726 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-12 20:05:43.178731 | orchestrator | Saturday 12 July 2025 19:57:59 +0000 (0:00:00.748) 0:03:12.406 ********* 2025-07-12 20:05:43.178737 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178742 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.178748 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.178753 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.178758 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.178764 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.178769 | orchestrator | 2025-07-12 20:05:43.178775 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-12 20:05:43.178780 | orchestrator | Saturday 12 July 2025 19:58:01 +0000 (0:00:01.288) 0:03:13.695 ********* 2025-07-12 20:05:43.178785 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-12 20:05:43.178791 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-12 20:05:43.178796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-07-12 20:05:43.178802 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178807 | orchestrator | 2025-07-12 20:05:43.178812 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-12 20:05:43.178818 | orchestrator | Saturday 12 July 2025 19:58:01 +0000 
(0:00:00.481) 0:03:14.176 ********* 2025-07-12 20:05:43.178823 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-12 20:05:43.178829 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-12 20:05:43.178834 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-07-12 20:05:43.178839 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178845 | orchestrator | 2025-07-12 20:05:43.178850 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-12 20:05:43.178856 | orchestrator | Saturday 12 July 2025 19:58:02 +0000 (0:00:00.442) 0:03:14.619 ********* 2025-07-12 20:05:43.178861 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-12 20:05:43.178866 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-12 20:05:43.178872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-07-12 20:05:43.178877 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178882 | orchestrator | 2025-07-12 20:05:43.178888 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-12 20:05:43.178893 | orchestrator | Saturday 12 July 2025 19:58:02 +0000 (0:00:00.450) 0:03:15.069 ********* 2025-07-12 20:05:43.178899 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178904 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.178909 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.178915 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.178920 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.178926 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.178931 | orchestrator | 2025-07-12 20:05:43.178936 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-12 20:05:43.178942 | orchestrator | Saturday 12 July 2025 19:58:03 +0000 
(0:00:00.747) 0:03:15.816 ********* 2025-07-12 20:05:43.178947 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-07-12 20:05:43.178953 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.178958 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-07-12 20:05:43.178963 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.178981 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-07-12 20:05:43.178986 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.178992 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-12 20:05:43.178997 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-12 20:05:43.179002 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-12 20:05:43.179008 | orchestrator | 2025-07-12 20:05:43.179013 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-07-12 20:05:43.179023 | orchestrator | Saturday 12 July 2025 19:58:05 +0000 (0:00:02.484) 0:03:18.300 ********* 2025-07-12 20:05:43.179028 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.179034 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.179039 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.179044 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:05:43.179050 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:43.179055 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:43.179060 | orchestrator | 2025-07-12 20:05:43.179066 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 20:05:43.179071 | orchestrator | Saturday 12 July 2025 19:58:08 +0000 (0:00:02.898) 0:03:21.199 ********* 2025-07-12 20:05:43.179076 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.179082 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.179087 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.179092 | orchestrator | changed: 
[testbed-node-3] 2025-07-12 20:05:43.179098 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:43.179103 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:43.179108 | orchestrator | 2025-07-12 20:05:43.179114 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-07-12 20:05:43.179119 | orchestrator | Saturday 12 July 2025 19:58:09 +0000 (0:00:01.175) 0:03:22.374 ********* 2025-07-12 20:05:43.179125 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179130 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.179135 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.179141 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:05:43.179146 | orchestrator | 2025-07-12 20:05:43.179155 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-07-12 20:05:43.179161 | orchestrator | Saturday 12 July 2025 19:58:10 +0000 (0:00:00.972) 0:03:23.347 ********* 2025-07-12 20:05:43.179166 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.179171 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.179177 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.179182 | orchestrator | 2025-07-12 20:05:43.179188 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-07-12 20:05:43.179209 | orchestrator | Saturday 12 July 2025 19:58:11 +0000 (0:00:00.292) 0:03:23.640 ********* 2025-07-12 20:05:43.179215 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.179221 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.179226 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.179232 | orchestrator | 2025-07-12 20:05:43.179237 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-07-12 20:05:43.179243 | orchestrator 
| Saturday 12 July 2025 19:58:12 +0000 (0:00:01.413) 0:03:25.053 ********* 2025-07-12 20:05:43.179248 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 20:05:43.179253 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 20:05:43.179259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 20:05:43.179264 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.179270 | orchestrator | 2025-07-12 20:05:43.179275 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-07-12 20:05:43.179280 | orchestrator | Saturday 12 July 2025 19:58:13 +0000 (0:00:00.521) 0:03:25.575 ********* 2025-07-12 20:05:43.179286 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.179291 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.179296 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.179302 | orchestrator | 2025-07-12 20:05:43.179307 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-07-12 20:05:43.179313 | orchestrator | Saturday 12 July 2025 19:58:13 +0000 (0:00:00.299) 0:03:25.875 ********* 2025-07-12 20:05:43.179318 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.179324 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.179329 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.179339 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.179344 | orchestrator | 2025-07-12 20:05:43.179350 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-12 20:05:43.179355 | orchestrator | Saturday 12 July 2025 19:58:14 +0000 (0:00:00.841) 0:03:26.716 ********* 2025-07-12 20:05:43.179360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:05:43.179366 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:05:43.179371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:05:43.179377 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179382 | orchestrator | 2025-07-12 20:05:43.179387 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-12 20:05:43.179393 | orchestrator | Saturday 12 July 2025 19:58:14 +0000 (0:00:00.302) 0:03:27.018 ********* 2025-07-12 20:05:43.179398 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179404 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.179409 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.179414 | orchestrator | 2025-07-12 20:05:43.179420 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-12 20:05:43.179425 | orchestrator | Saturday 12 July 2025 19:58:14 +0000 (0:00:00.262) 0:03:27.281 ********* 2025-07-12 20:05:43.179431 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179436 | orchestrator | 2025-07-12 20:05:43.179441 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-12 20:05:43.179447 | orchestrator | Saturday 12 July 2025 19:58:14 +0000 (0:00:00.166) 0:03:27.448 ********* 2025-07-12 20:05:43.179452 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179458 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.179463 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.179468 | orchestrator | 2025-07-12 20:05:43.179474 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-12 20:05:43.179479 | orchestrator | Saturday 12 July 2025 19:58:15 +0000 (0:00:00.254) 0:03:27.702 ********* 2025-07-12 20:05:43.179484 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179490 | orchestrator | 2025-07-12 
20:05:43.179495 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-12 20:05:43.179500 | orchestrator | Saturday 12 July 2025 19:58:15 +0000 (0:00:00.202) 0:03:27.905 ********* 2025-07-12 20:05:43.179506 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179511 | orchestrator | 2025-07-12 20:05:43.179517 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-12 20:05:43.179522 | orchestrator | Saturday 12 July 2025 19:58:15 +0000 (0:00:00.219) 0:03:28.125 ********* 2025-07-12 20:05:43.179527 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179533 | orchestrator | 2025-07-12 20:05:43.179538 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-12 20:05:43.179544 | orchestrator | Saturday 12 July 2025 19:58:15 +0000 (0:00:00.261) 0:03:28.386 ********* 2025-07-12 20:05:43.179549 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179554 | orchestrator | 2025-07-12 20:05:43.179560 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-12 20:05:43.179565 | orchestrator | Saturday 12 July 2025 19:58:16 +0000 (0:00:00.165) 0:03:28.552 ********* 2025-07-12 20:05:43.179571 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179576 | orchestrator | 2025-07-12 20:05:43.179581 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-12 20:05:43.179587 | orchestrator | Saturday 12 July 2025 19:58:16 +0000 (0:00:00.176) 0:03:28.728 ********* 2025-07-12 20:05:43.179592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:05:43.179597 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:05:43.179603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:05:43.179617 | orchestrator | 
skipping: [testbed-node-3] 2025-07-12 20:05:43.179623 | orchestrator | 2025-07-12 20:05:43.179628 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-12 20:05:43.179634 | orchestrator | Saturday 12 July 2025 19:58:16 +0000 (0:00:00.344) 0:03:29.072 ********* 2025-07-12 20:05:43.179639 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179645 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.179650 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.179655 | orchestrator | 2025-07-12 20:05:43.179676 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-12 20:05:43.179682 | orchestrator | Saturday 12 July 2025 19:58:16 +0000 (0:00:00.239) 0:03:29.312 ********* 2025-07-12 20:05:43.179688 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179693 | orchestrator | 2025-07-12 20:05:43.179699 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-12 20:05:43.179704 | orchestrator | Saturday 12 July 2025 19:58:17 +0000 (0:00:00.185) 0:03:29.497 ********* 2025-07-12 20:05:43.179709 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179715 | orchestrator | 2025-07-12 20:05:43.179720 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-07-12 20:05:43.179725 | orchestrator | Saturday 12 July 2025 19:58:17 +0000 (0:00:00.211) 0:03:29.709 ********* 2025-07-12 20:05:43.179731 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.179736 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.179741 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.179747 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.179752 | orchestrator | 2025-07-12 20:05:43.179758 | orchestrator | RUNNING HANDLER 
[ceph-handler : Set _mds_handler_called before restart] ******** 2025-07-12 20:05:43.179763 | orchestrator | Saturday 12 July 2025 19:58:18 +0000 (0:00:00.956) 0:03:30.666 ********* 2025-07-12 20:05:43.179768 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.179774 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.179779 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.179785 | orchestrator | 2025-07-12 20:05:43.179790 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-12 20:05:43.179796 | orchestrator | Saturday 12 July 2025 19:58:18 +0000 (0:00:00.275) 0:03:30.941 ********* 2025-07-12 20:05:43.179801 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:05:43.179806 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:43.179812 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:43.179817 | orchestrator | 2025-07-12 20:05:43.179822 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-12 20:05:43.179828 | orchestrator | Saturday 12 July 2025 19:58:19 +0000 (0:00:01.183) 0:03:32.125 ********* 2025-07-12 20:05:43.179833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:05:43.179839 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:05:43.179844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:05:43.179849 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.179855 | orchestrator | 2025-07-12 20:05:43.179860 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-07-12 20:05:43.179866 | orchestrator | Saturday 12 July 2025 19:58:20 +0000 (0:00:01.010) 0:03:33.135 ********* 2025-07-12 20:05:43.179871 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.179876 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.179882 | orchestrator | ok: 
[testbed-node-5] 2025-07-12 20:05:43.179887 | orchestrator | 2025-07-12 20:05:43.179893 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-07-12 20:05:43.179898 | orchestrator | Saturday 12 July 2025 19:58:21 +0000 (0:00:00.389) 0:03:33.525 ********* 2025-07-12 20:05:43.179903 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.179909 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.179914 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.179923 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.179929 | orchestrator | 2025-07-12 20:05:43.179934 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-12 20:05:43.179940 | orchestrator | Saturday 12 July 2025 19:58:22 +0000 (0:00:00.979) 0:03:34.505 ********* 2025-07-12 20:05:43.179945 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.179950 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.179956 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.179961 | orchestrator | 2025-07-12 20:05:43.179998 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-12 20:05:43.180004 | orchestrator | Saturday 12 July 2025 19:58:22 +0000 (0:00:00.375) 0:03:34.881 ********* 2025-07-12 20:05:43.180009 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:05:43.180015 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:43.180020 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:43.180025 | orchestrator | 2025-07-12 20:05:43.180031 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-12 20:05:43.180036 | orchestrator | Saturday 12 July 2025 19:58:23 +0000 (0:00:01.381) 0:03:36.262 ********* 2025-07-12 20:05:43.180042 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:05:43.180047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:05:43.180052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:05:43.180058 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.180063 | orchestrator | 2025-07-12 20:05:43.180068 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-12 20:05:43.180074 | orchestrator | Saturday 12 July 2025 19:58:24 +0000 (0:00:00.839) 0:03:37.101 ********* 2025-07-12 20:05:43.180079 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.180084 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.180090 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.180095 | orchestrator | 2025-07-12 20:05:43.180101 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-07-12 20:05:43.180106 | orchestrator | Saturday 12 July 2025 19:58:25 +0000 (0:00:00.414) 0:03:37.516 ********* 2025-07-12 20:05:43.180111 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.180120 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.180126 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.180131 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.180137 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.180142 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.180147 | orchestrator | 2025-07-12 20:05:43.180153 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-12 20:05:43.180158 | orchestrator | Saturday 12 July 2025 19:58:25 +0000 (0:00:00.826) 0:03:38.342 ********* 2025-07-12 20:05:43.180181 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.180187 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.180193 | orchestrator | skipping: [testbed-node-5] 
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Saturday 12 July 2025 19:58:26 +0000 (0:00:00.826) 0:03:39.168 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Saturday 12 July 2025 19:58:27 +0000 (0:00:00.343) 0:03:39.512 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Saturday 12 July 2025 19:58:28 +0000 (0:00:01.206) 0:03:40.718 *********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Saturday 12 July 2025 19:58:29 +0000 (0:00:00.972) 0:03:41.691 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 12 July 2025 19:58:30 +0000 (0:00:00.975) 0:03:42.666 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 12 July 2025 19:58:30 +0000 (0:00:00.619) 0:03:43.286 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 12 July 2025 19:58:31 +0000 (0:00:00.810) 0:03:44.096 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 12 July 2025 19:58:32 +0000 (0:00:00.759) 0:03:44.856 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 12 July 2025 19:58:32 +0000 (0:00:00.352) 0:03:45.209 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 12 July 2025 19:58:33 +0000 (0:00:00.373) 0:03:45.582 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 12 July 2025 19:58:33 +0000 (0:00:00.572) 0:03:46.155 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 12 July 2025 19:58:34 +0000 (0:00:00.742) 0:03:46.897 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 12 July 2025 19:58:34 +0000 (0:00:00.312) 0:03:47.210 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 12 July 2025 19:58:35 +0000 (0:00:00.289) 0:03:47.499 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 12 July 2025 19:58:36 +0000 (0:00:01.029) 0:03:48.528 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 12 July 2025 19:58:36 +0000 (0:00:00.773) 0:03:49.302 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 12 July 2025 19:58:37 +0000 (0:00:00.385) 0:03:49.688 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 12 July 2025 19:58:37 +0000 (0:00:00.400) 0:03:50.088 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 12 July 2025 19:58:38 +0000 (0:00:00.584) 0:03:50.673 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 12 July 2025 19:58:38 +0000 (0:00:00.319) 0:03:50.993 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 12 July 2025 19:58:38 +0000 (0:00:00.300) 0:03:51.293 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 12 July 2025 19:58:39 +0000 (0:00:00.312) 0:03:51.606 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 12 July 2025 19:58:39 +0000 (0:00:00.583) 0:03:52.189 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 12 July 2025 19:58:40 +0000 (0:00:00.376) 0:03:52.566 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 12 July 2025 19:58:40 +0000 (0:00:00.320) 0:03:52.886 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Saturday 12 July 2025 19:58:41 +0000 (0:00:00.785) 0:03:53.672 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Saturday 12 July 2025 19:58:41 +0000 (0:00:00.338) 0:03:54.011 *********
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Saturday 12 July 2025 19:58:42 +0000 (0:00:00.584) 0:03:54.595 *********
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Saturday 12 July 2025 19:58:42 +0000 (0:00:00.158) 0:03:54.753 *********
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Saturday 12 July 2025 19:58:43 +0000 (0:00:01.498) 0:03:56.252 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Saturday 12 July 2025 19:58:44 +0000 (0:00:00.366) 0:03:56.618 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Saturday 12 July 2025 19:58:44 +0000 (0:00:00.336) 0:03:56.954 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Saturday 12 July 2025 19:58:45 +0000 (0:00:01.101) 0:03:58.055 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Create monitor directory] *************************************
Saturday 12 July 2025 19:58:46 +0000 (0:00:01.157) 0:03:59.213 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Saturday 12 July 2025 19:58:47 +0000 (0:00:00.765) 0:03:59.979 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Saturday 12 July 2025 19:58:48 +0000 (0:00:00.758) 0:04:00.738 *********
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Saturday 12 July 2025 19:58:49 +0000 (0:00:01.372) 0:04:02.110 *********
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Saturday 12 July 2025 19:58:50 +0000 (0:00:00.742) 0:04:02.852 *********
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-1 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Saturday 12 July 2025 19:58:54 +0000 (0:00:03.648) 0:04:06.501 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Saturday 12 July 2025 19:58:55 +0000 (0:00:01.451) 0:04:07.953 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Saturday 12 July 2025 19:58:55 +0000 (0:00:00.325) 0:04:08.278 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Saturday 12 July 2025 19:58:56 +0000 (0:00:00.322) 0:04:08.601 *********
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Saturday 12 July 2025 19:58:57 +0000 (0:00:01.782) 0:04:10.384 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Saturday 12 July 2025 19:58:59 +0000 (0:00:01.621) 0:04:12.005 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Saturday 12 July 2025 19:58:59 +0000 (0:00:00.308) 0:04:12.313 *********
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Saturday 12 July 2025 19:59:00 +0000 (0:00:00.535) 0:04:12.849 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Saturday 12 July 2025 19:59:00 +0000 (0:00:00.590) 0:04:13.439 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Saturday 12 July 2025 19:59:01 +0000 (0:00:00.306) 0:04:13.745 *********
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Saturday 12 July 2025 19:59:01 +0000 (0:00:00.568) 0:04:14.314 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Saturday 12 July 2025 19:59:03 +0000 (0:00:01.710) 0:04:16.024 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Saturday 12 July 2025 19:59:04 +0000 (0:00:01.142) 0:04:17.166 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Start the monitor service] ************************************
Saturday 12 July 2025 19:59:06 +0000 (0:00:01.700) 0:04:18.867 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Saturday 12 July 2025 19:59:08 +0000 (0:00:01.776) 0:04:20.644 *********
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Saturday 12 July 2025 19:59:08 +0000 (0:00:00.840) 0:04:21.485 *********
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Saturday 12 July 2025 19:59:10 +0000 (0:00:01.186) 0:04:22.671 *********
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Saturday 12 July 2025 19:59:19 +0000 (0:00:09.322) 0:04:31.994 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Saturday 12 July 2025 19:59:19 +0000 (0:00:00.313) 0:04:32.307 *********
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__03f3773fd507d9ffb0d7d506811110c432f59336'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__03f3773fd507d9ffb0d7d506811110c432f59336'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__03f3773fd507d9ffb0d7d506811110c432f59336'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__03f3773fd507d9ffb0d7d506811110c432f59336'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__03f3773fd507d9ffb0d7d506811110c432f59336'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__03f3773fd507d9ffb0d7d506811110c432f59336'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__03f3773fd507d9ffb0d7d506811110c432f59336'}])

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 12 July 2025 19:59:35 +0000 (0:00:15.396) 0:04:47.704 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Saturday 12 July 2025 19:59:35 +0000 (0:00:00.298) 0:04:48.002 *********
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Saturday 12 July 2025 19:59:36 +0000 (0:00:00.549) 0:04:48.552 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Saturday 12 July 2025 19:59:36 +0000 (0:00:00.253) 0:04:48.806 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Saturday 12 July 2025 19:59:36 +0000 (0:00:00.285) 0:04:49.092 *********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Saturday 12 July 2025 19:59:37 +0000 (0:00:00.686) 0:04:49.778 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mgr] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 12 July 2025 19:59:37 +0000 (0:00:00.663) 0:04:50.441 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 12 July 2025 19:59:38 +0000 (0:00:00.424) 0:04:50.866 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 12 July 2025 19:59:38 +0000 (0:00:00.588) 0:04:51.455 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 12 July 2025 19:59:39 +0000 (0:00:00.699) 0:04:52.154 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 12 July 2025 19:59:39 +0000 (0:00:00.261) 0:04:52.415 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 12 July 2025 19:59:40 +0000 (0:00:00.399) 0:04:52.815 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 12 July 2025 19:59:40 +0000 (0:00:00.277) 0:04:53.093 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 12 July 2025 19:59:41 +0000 (0:00:00.642) 0:04:53.736 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 12 July 2025 19:59:41 +0000 (0:00:00.261) 0:04:53.997 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 12 July 2025 19:59:41 +0000 (0:00:00.438) 0:04:54.436 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 12 July 2025 19:59:42 +0000 (0:00:00.729) 0:04:55.166 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 12 July 2025 19:59:43 +0000 (0:00:00.739) 0:04:55.905 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 12 July 2025 19:59:43 +0000 (0:00:00.254) 0:04:56.159 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 12 July 2025 19:59:44 +0000 (0:00:00.459) 0:04:56.619 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 12 July 2025 19:59:44 +0000 (0:00:00.281) 0:04:56.900 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 12 July 2025 19:59:44 +0000 (0:00:00.264) 0:04:57.165 *********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
2025-07-12 20:05:43.182793 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 20:05:43.182798 | orchestrator | Saturday 12 July 2025 19:59:44 +0000 (0:00:00.262) 0:04:57.427 ********* 2025-07-12 20:05:43.182802 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.182807 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.182811 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.182816 | orchestrator | 2025-07-12 20:05:43.182820 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 20:05:43.182825 | orchestrator | Saturday 12 July 2025 19:59:45 +0000 (0:00:00.461) 0:04:57.889 ********* 2025-07-12 20:05:43.182830 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.182834 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.182839 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.182843 | orchestrator | 2025-07-12 20:05:43.182848 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 20:05:43.182852 | orchestrator | Saturday 12 July 2025 19:59:45 +0000 (0:00:00.290) 0:04:58.179 ********* 2025-07-12 20:05:43.182857 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.182862 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.182866 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.182871 | orchestrator | 2025-07-12 20:05:43.182876 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 20:05:43.182880 | orchestrator | Saturday 12 July 2025 19:59:45 +0000 (0:00:00.284) 0:04:58.464 ********* 2025-07-12 20:05:43.182885 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.182889 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.182894 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.182898 | orchestrator | 2025-07-12 20:05:43.182903 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 20:05:43.182908 | orchestrator | Saturday 12 July 2025 19:59:46 +0000 (0:00:00.314) 0:04:58.779 ********* 2025-07-12 20:05:43.182912 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.182917 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.182922 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.182926 | orchestrator | 2025-07-12 20:05:43.182931 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-07-12 20:05:43.182935 | orchestrator | Saturday 12 July 2025 19:59:46 +0000 (0:00:00.610) 0:04:59.389 ********* 2025-07-12 20:05:43.182940 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 20:05:43.182945 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 20:05:43.182949 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:05:43.182954 | orchestrator | 2025-07-12 20:05:43.182959 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-07-12 20:05:43.182963 | orchestrator | Saturday 12 July 2025 19:59:47 +0000 (0:00:00.594) 0:04:59.983 ********* 2025-07-12 20:05:43.182978 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:05:43.182983 | orchestrator | 2025-07-12 20:05:43.182987 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-07-12 20:05:43.182992 | orchestrator | Saturday 12 July 2025 19:59:47 +0000 (0:00:00.436) 0:05:00.420 ********* 2025-07-12 20:05:43.182996 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.183001 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.183005 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.183010 | orchestrator 
| 2025-07-12 20:05:43.183014 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-07-12 20:05:43.183019 | orchestrator | Saturday 12 July 2025 19:59:48 +0000 (0:00:00.838) 0:05:01.258 ********* 2025-07-12 20:05:43.183027 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.183032 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.183036 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.183041 | orchestrator | 2025-07-12 20:05:43.183045 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-07-12 20:05:43.183050 | orchestrator | Saturday 12 July 2025 19:59:49 +0000 (0:00:00.305) 0:05:01.563 ********* 2025-07-12 20:05:43.183055 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 20:05:43.183059 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 20:05:43.183064 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 20:05:43.183068 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-07-12 20:05:43.183073 | orchestrator | 2025-07-12 20:05:43.183077 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-07-12 20:05:43.183082 | orchestrator | Saturday 12 July 2025 19:59:59 +0000 (0:00:10.313) 0:05:11.877 ********* 2025-07-12 20:05:43.183086 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.183091 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.183096 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.183100 | orchestrator | 2025-07-12 20:05:43.183105 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-07-12 20:05:43.183109 | orchestrator | Saturday 12 July 2025 19:59:59 +0000 (0:00:00.330) 0:05:12.208 ********* 2025-07-12 20:05:43.183114 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-12 20:05:43.183118 | orchestrator 
| skipping: [testbed-node-1] => (item=None)  2025-07-12 20:05:43.183123 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 20:05:43.183127 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-07-12 20:05:43.183132 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:05:43.183141 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:05:43.183146 | orchestrator | 2025-07-12 20:05:43.183151 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-07-12 20:05:43.183155 | orchestrator | Saturday 12 July 2025 20:00:02 +0000 (0:00:02.320) 0:05:14.529 ********* 2025-07-12 20:05:43.183160 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-12 20:05:43.183164 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-12 20:05:43.183182 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 20:05:43.183187 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 20:05:43.183192 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-07-12 20:05:43.183196 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-07-12 20:05:43.183201 | orchestrator | 2025-07-12 20:05:43.183205 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-07-12 20:05:43.183210 | orchestrator | Saturday 12 July 2025 20:00:03 +0000 (0:00:01.426) 0:05:15.955 ********* 2025-07-12 20:05:43.183215 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.183219 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.183224 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.183228 | orchestrator | 2025-07-12 20:05:43.183233 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-07-12 20:05:43.183237 | orchestrator | Saturday 12 July 2025 20:00:04 +0000 (0:00:00.608) 0:05:16.564 
********* 2025-07-12 20:05:43.183242 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.183246 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.183251 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.183255 | orchestrator | 2025-07-12 20:05:43.183260 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-07-12 20:05:43.183264 | orchestrator | Saturday 12 July 2025 20:00:04 +0000 (0:00:00.288) 0:05:16.853 ********* 2025-07-12 20:05:43.183269 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.183274 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.183281 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.183286 | orchestrator | 2025-07-12 20:05:43.183291 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-07-12 20:05:43.183295 | orchestrator | Saturday 12 July 2025 20:00:04 +0000 (0:00:00.266) 0:05:17.120 ********* 2025-07-12 20:05:43.183300 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:05:43.183304 | orchestrator | 2025-07-12 20:05:43.183309 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-07-12 20:05:43.183314 | orchestrator | Saturday 12 July 2025 20:00:05 +0000 (0:00:00.639) 0:05:17.759 ********* 2025-07-12 20:05:43.183318 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.183323 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.183327 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.183332 | orchestrator | 2025-07-12 20:05:43.183336 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-07-12 20:05:43.183341 | orchestrator | Saturday 12 July 2025 20:00:05 +0000 (0:00:00.271) 0:05:18.031 ********* 2025-07-12 20:05:43.183346 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 20:05:43.183350 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.183355 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.183359 | orchestrator | 2025-07-12 20:05:43.183364 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-07-12 20:05:43.183368 | orchestrator | Saturday 12 July 2025 20:00:05 +0000 (0:00:00.279) 0:05:18.311 ********* 2025-07-12 20:05:43.183373 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:05:43.183378 | orchestrator | 2025-07-12 20:05:43.183382 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-07-12 20:05:43.183387 | orchestrator | Saturday 12 July 2025 20:00:06 +0000 (0:00:00.644) 0:05:18.955 ********* 2025-07-12 20:05:43.183391 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.183396 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.183400 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.183405 | orchestrator | 2025-07-12 20:05:43.183410 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-07-12 20:05:43.183414 | orchestrator | Saturday 12 July 2025 20:00:07 +0000 (0:00:01.211) 0:05:20.167 ********* 2025-07-12 20:05:43.183419 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.183423 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.183428 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.183432 | orchestrator | 2025-07-12 20:05:43.183437 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-07-12 20:05:43.183442 | orchestrator | Saturday 12 July 2025 20:00:08 +0000 (0:00:01.138) 0:05:21.306 ********* 2025-07-12 20:05:43.183446 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.183451 | orchestrator | changed: 
[testbed-node-1] 2025-07-12 20:05:43.183455 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.183460 | orchestrator | 2025-07-12 20:05:43.183464 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-07-12 20:05:43.183469 | orchestrator | Saturday 12 July 2025 20:00:10 +0000 (0:00:01.921) 0:05:23.227 ********* 2025-07-12 20:05:43.183474 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.183478 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.183483 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.183487 | orchestrator | 2025-07-12 20:05:43.183492 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-07-12 20:05:43.183496 | orchestrator | Saturday 12 July 2025 20:00:13 +0000 (0:00:02.323) 0:05:25.550 ********* 2025-07-12 20:05:43.183501 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.183505 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.183510 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-07-12 20:05:43.183514 | orchestrator | 2025-07-12 20:05:43.183519 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-07-12 20:05:43.183527 | orchestrator | Saturday 12 July 2025 20:00:13 +0000 (0:00:00.344) 0:05:25.894 ********* 2025-07-12 20:05:43.183534 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-07-12 20:05:43.183539 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-07-12 20:05:43.183544 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-07-12 20:05:43.183560 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2025-07-12 20:05:43.183566 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-07-12 20:05:43.183570 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 2025-07-12 20:05:43.183575 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:05:43.183579 | orchestrator | 2025-07-12 20:05:43.183584 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-07-12 20:05:43.183589 | orchestrator | Saturday 12 July 2025 20:00:49 +0000 (0:00:36.367) 0:06:02.262 ********* 2025-07-12 20:05:43.183593 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:05:43.183598 | orchestrator | 2025-07-12 20:05:43.183602 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-07-12 20:05:43.183607 | orchestrator | Saturday 12 July 2025 20:00:51 +0000 (0:00:01.451) 0:06:03.713 ********* 2025-07-12 20:05:43.183611 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.183616 | orchestrator | 2025-07-12 20:05:43.183620 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-07-12 20:05:43.183625 | orchestrator | Saturday 12 July 2025 20:00:51 +0000 (0:00:00.640) 0:06:04.354 ********* 2025-07-12 20:05:43.183629 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.183634 | orchestrator | 2025-07-12 20:05:43.183638 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-07-12 20:05:43.183643 | orchestrator | Saturday 12 July 2025 20:00:51 +0000 (0:00:00.123) 0:06:04.477 ********* 2025-07-12 20:05:43.183647 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-07-12 20:05:43.183652 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => 
(item=nfs) 2025-07-12 20:05:43.183657 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-07-12 20:05:43.183661 | orchestrator | 2025-07-12 20:05:43.183666 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-07-12 20:05:43.183670 | orchestrator | Saturday 12 July 2025 20:00:58 +0000 (0:00:06.312) 0:06:10.789 ********* 2025-07-12 20:05:43.183675 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-07-12 20:05:43.183679 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-07-12 20:05:43.183684 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-07-12 20:05:43.183688 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-07-12 20:05:43.183693 | orchestrator | 2025-07-12 20:05:43.183697 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 20:05:43.183702 | orchestrator | Saturday 12 July 2025 20:01:03 +0000 (0:00:04.771) 0:06:15.561 ********* 2025-07-12 20:05:43.183706 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.183711 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.183716 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.183720 | orchestrator | 2025-07-12 20:05:43.183725 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-12 20:05:43.183729 | orchestrator | Saturday 12 July 2025 20:01:04 +0000 (0:00:00.979) 0:06:16.540 ********* 2025-07-12 20:05:43.183734 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:05:43.183741 | orchestrator | 2025-07-12 20:05:43.183746 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-12 20:05:43.183751 | orchestrator | 
Saturday 12 July 2025 20:01:04 +0000 (0:00:00.577) 0:06:17.117 ********* 2025-07-12 20:05:43.183755 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.183760 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.183764 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.183769 | orchestrator | 2025-07-12 20:05:43.183774 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-12 20:05:43.183778 | orchestrator | Saturday 12 July 2025 20:01:04 +0000 (0:00:00.280) 0:06:17.398 ********* 2025-07-12 20:05:43.183783 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.183787 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.183792 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.183796 | orchestrator | 2025-07-12 20:05:43.183801 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-12 20:05:43.183805 | orchestrator | Saturday 12 July 2025 20:01:06 +0000 (0:00:01.488) 0:06:18.887 ********* 2025-07-12 20:05:43.183810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 20:05:43.183814 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 20:05:43.183819 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 20:05:43.183824 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.183828 | orchestrator | 2025-07-12 20:05:43.183833 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-12 20:05:43.183837 | orchestrator | Saturday 12 July 2025 20:01:06 +0000 (0:00:00.551) 0:06:19.438 ********* 2025-07-12 20:05:43.183842 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.183846 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.183851 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.183856 | orchestrator | 2025-07-12 20:05:43.183860 | orchestrator | PLAY [Apply role 
ceph-osd] ***************************************************** 2025-07-12 20:05:43.183865 | orchestrator | 2025-07-12 20:05:43.183872 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 20:05:43.183876 | orchestrator | Saturday 12 July 2025 20:01:07 +0000 (0:00:00.465) 0:06:19.904 ********* 2025-07-12 20:05:43.183881 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.183886 | orchestrator | 2025-07-12 20:05:43.183890 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 20:05:43.183907 | orchestrator | Saturday 12 July 2025 20:01:08 +0000 (0:00:00.728) 0:06:20.633 ********* 2025-07-12 20:05:43.183913 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.183917 | orchestrator | 2025-07-12 20:05:43.183922 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 20:05:43.183926 | orchestrator | Saturday 12 July 2025 20:01:08 +0000 (0:00:00.509) 0:06:21.142 ********* 2025-07-12 20:05:43.183931 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.183936 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.183940 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.183945 | orchestrator | 2025-07-12 20:05:43.183949 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 20:05:43.183954 | orchestrator | Saturday 12 July 2025 20:01:08 +0000 (0:00:00.305) 0:06:21.448 ********* 2025-07-12 20:05:43.183958 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.183963 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.183979 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.183983 | orchestrator | 
2025-07-12 20:05:43.183988 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 20:05:43.183993 | orchestrator | Saturday 12 July 2025 20:01:09 +0000 (0:00:01.004) 0:06:22.453 ********* 2025-07-12 20:05:43.184002 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.184006 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.184011 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.184016 | orchestrator | 2025-07-12 20:05:43.184020 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 20:05:43.184025 | orchestrator | Saturday 12 July 2025 20:01:10 +0000 (0:00:00.789) 0:06:23.242 ********* 2025-07-12 20:05:43.184029 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.184034 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.184038 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.184043 | orchestrator | 2025-07-12 20:05:43.184048 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 20:05:43.184052 | orchestrator | Saturday 12 July 2025 20:01:11 +0000 (0:00:00.736) 0:06:23.978 ********* 2025-07-12 20:05:43.184057 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.184061 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.184066 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.184070 | orchestrator | 2025-07-12 20:05:43.184075 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 20:05:43.184079 | orchestrator | Saturday 12 July 2025 20:01:11 +0000 (0:00:00.308) 0:06:24.286 ********* 2025-07-12 20:05:43.184084 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.184088 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.184093 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.184097 | orchestrator | 2025-07-12 20:05:43.184102 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 20:05:43.184106 | orchestrator | Saturday 12 July 2025 20:01:12 +0000 (0:00:00.555) 0:06:24.842 ********* 2025-07-12 20:05:43.184111 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.184115 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.184120 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.184124 | orchestrator | 2025-07-12 20:05:43.184129 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 20:05:43.184134 | orchestrator | Saturday 12 July 2025 20:01:12 +0000 (0:00:00.313) 0:06:25.155 ********* 2025-07-12 20:05:43.184138 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.184143 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.184147 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.184152 | orchestrator | 2025-07-12 20:05:43.184156 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 20:05:43.184161 | orchestrator | Saturday 12 July 2025 20:01:13 +0000 (0:00:00.720) 0:06:25.876 ********* 2025-07-12 20:05:43.184165 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.184170 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.184174 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.184179 | orchestrator | 2025-07-12 20:05:43.184183 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 20:05:43.184188 | orchestrator | Saturday 12 July 2025 20:01:14 +0000 (0:00:00.710) 0:06:26.586 ********* 2025-07-12 20:05:43.184192 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.184197 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.184202 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.184206 | orchestrator | 2025-07-12 20:05:43.184211 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ******************************
2025-07-12 20:05:43.184215 | orchestrator | Saturday 12 July 2025 20:01:14 +0000 (0:00:00.529) 0:06:27.115 *********
2025-07-12 20:05:43.184220 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.184224 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.184229 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.184233 | orchestrator |
2025-07-12 20:05:43.184238 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 20:05:43.184242 | orchestrator | Saturday 12 July 2025 20:01:14 +0000 (0:00:00.309) 0:06:27.424 *********
2025-07-12 20:05:43.184247 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.184251 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.184259 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.184264 | orchestrator |
2025-07-12 20:05:43.184269 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 20:05:43.184273 | orchestrator | Saturday 12 July 2025 20:01:15 +0000 (0:00:00.344) 0:06:27.769 *********
2025-07-12 20:05:43.184278 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.184282 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.184287 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.184291 | orchestrator |
2025-07-12 20:05:43.184296 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 20:05:43.184303 | orchestrator | Saturday 12 July 2025 20:01:15 +0000 (0:00:00.344) 0:06:28.113 *********
2025-07-12 20:05:43.184308 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.184313 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.184317 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.184322 | orchestrator |
2025-07-12 20:05:43.184326 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 20:05:43.184331 | orchestrator | Saturday 12 July 2025 20:01:16 +0000 (0:00:00.518) 0:06:28.632 *********
2025-07-12 20:05:43.184337 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.184342 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.184347 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.184351 | orchestrator |
2025-07-12 20:05:43.184356 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 20:05:43.184360 | orchestrator | Saturday 12 July 2025 20:01:16 +0000 (0:00:00.305) 0:06:28.938 *********
2025-07-12 20:05:43.184365 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.184370 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.184374 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.184379 | orchestrator |
2025-07-12 20:05:43.184383 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 20:05:43.184388 | orchestrator | Saturday 12 July 2025 20:01:16 +0000 (0:00:00.298) 0:06:29.236 *********
2025-07-12 20:05:43.184392 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.184397 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.184401 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.184406 | orchestrator |
2025-07-12 20:05:43.184410 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 20:05:43.184415 | orchestrator | Saturday 12 July 2025 20:01:17 +0000 (0:00:00.293) 0:06:29.530 *********
2025-07-12 20:05:43.184419 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.184424 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.184428 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.184433 | orchestrator |
2025-07-12 20:05:43.184437 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 20:05:43.184442 | orchestrator | Saturday 12 July 2025 20:01:17 +0000 (0:00:00.589) 0:06:30.119 *********
2025-07-12 20:05:43.184446 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.184451 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.184456 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.184460 | orchestrator |
2025-07-12 20:05:43.184465 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-07-12 20:05:43.184469 | orchestrator | Saturday 12 July 2025 20:01:18 +0000 (0:00:00.547) 0:06:30.666 *********
2025-07-12 20:05:43.184474 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.184478 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.184483 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.184487 | orchestrator |
2025-07-12 20:05:43.184492 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-07-12 20:05:43.184496 | orchestrator | Saturday 12 July 2025 20:01:18 +0000 (0:00:00.307) 0:06:30.973 *********
2025-07-12 20:05:43.184501 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 20:05:43.184505 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:05:43.184513 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:05:43.184518 | orchestrator |
2025-07-12 20:05:43.184522 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-07-12 20:05:43.184527 | orchestrator | Saturday 12 July 2025 20:01:19 +0000 (0:00:00.859) 0:06:31.833 *********
2025-07-12 20:05:43.184531 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.184536 | orchestrator |
2025-07-12 20:05:43.184540 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-07-12 20:05:43.184545 | orchestrator | Saturday 12 July 2025 20:01:20 +0000 (0:00:00.765) 0:06:32.598 *********
2025-07-12 20:05:43.184550 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.184554 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.184559 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.184563 | orchestrator |
2025-07-12 20:05:43.184568 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-07-12 20:05:43.184572 | orchestrator | Saturday 12 July 2025 20:01:20 +0000 (0:00:00.328) 0:06:32.926 *********
2025-07-12 20:05:43.184577 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.184581 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.184586 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.184590 | orchestrator |
2025-07-12 20:05:43.184595 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-07-12 20:05:43.184599 | orchestrator | Saturday 12 July 2025 20:01:20 +0000 (0:00:00.315) 0:06:33.242 *********
2025-07-12 20:05:43.184604 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.184608 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.184613 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.184617 | orchestrator |
2025-07-12 20:05:43.184622 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-07-12 20:05:43.184626 | orchestrator | Saturday 12 July 2025 20:01:21 +0000 (0:00:00.946) 0:06:34.189 *********
2025-07-12 20:05:43.184631 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.184635 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.184640 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.184644 | orchestrator |
2025-07-12 20:05:43.184649 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-07-12 20:05:43.184654 | orchestrator | Saturday 12 July 2025 20:01:22 +0000 (0:00:00.332) 0:06:34.522 *********
2025-07-12 20:05:43.184658 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-07-12 20:05:43.184663 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-07-12 20:05:43.184667 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-07-12 20:05:43.184675 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-07-12 20:05:43.184680 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-07-12 20:05:43.184684 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-07-12 20:05:43.184689 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-07-12 20:05:43.184697 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-07-12 20:05:43.184702 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-07-12 20:05:43.184707 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-07-12 20:05:43.184711 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-07-12 20:05:43.184716 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-07-12 20:05:43.184720 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-07-12 20:05:43.184728 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-07-12 20:05:43.184733 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-07-12 20:05:43.184738 | orchestrator |
2025-07-12 20:05:43.184742 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-07-12 20:05:43.184747 | orchestrator | Saturday 12 July 2025 20:01:25 +0000 (0:00:03.407) 0:06:37.929 *********
2025-07-12 20:05:43.184751 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.184756 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.184760 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.184765 | orchestrator |
2025-07-12 20:05:43.184769 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-07-12 20:05:43.184774 | orchestrator | Saturday 12 July 2025 20:01:25 +0000 (0:00:00.301) 0:06:38.231 *********
2025-07-12 20:05:43.184778 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.184783 | orchestrator |
2025-07-12 20:05:43.184788 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-07-12 20:05:43.184792 | orchestrator | Saturday 12 July 2025 20:01:26 +0000 (0:00:00.815) 0:06:39.046 *********
2025-07-12 20:05:43.184797 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-07-12 20:05:43.184801 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-07-12 20:05:43.184806 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-07-12 20:05:43.184810 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-07-12 20:05:43.184815 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-07-12 20:05:43.184820 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-07-12 20:05:43.184824 | orchestrator |
2025-07-12 20:05:43.184829 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-07-12 20:05:43.184833 | orchestrator | Saturday 12 July 2025 20:01:27 +0000 (0:00:01.038) 0:06:40.085 *********
2025-07-12 20:05:43.184838 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:05:43.184842 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:05:43.184847 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:05:43.184851 | orchestrator |
2025-07-12 20:05:43.184856 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-07-12 20:05:43.184860 | orchestrator | Saturday 12 July 2025 20:01:29 +0000 (0:00:02.380) 0:06:42.465 *********
2025-07-12 20:05:43.184865 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 20:05:43.184869 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:05:43.184874 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.184878 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:05:43.184883 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-07-12 20:05:43.184887 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.184892 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:05:43.184896 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-07-12 20:05:43.184901 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.184905 | orchestrator |
2025-07-12 20:05:43.184910 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-07-12 20:05:43.184914 | orchestrator | Saturday 12 July 2025 20:01:31 +0000 (0:00:01.420) 0:06:43.886 *********
2025-07-12 20:05:43.184919 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:05:43.184923 | orchestrator |
2025-07-12 20:05:43.184928 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-07-12 20:05:43.184932 | orchestrator | Saturday 12 July 2025 20:01:33 +0000 (0:00:02.181) 0:06:46.068 *********
2025-07-12 20:05:43.184937 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.184945 | orchestrator |
2025-07-12 20:05:43.184949 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-07-12 20:05:43.184954 | orchestrator | Saturday 12 July 2025 20:01:34 +0000 (0:00:00.548) 0:06:46.616 *********
2025-07-12 20:05:43.184959 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-aa90e2bf-e75d-5c47-ae76-8a1384e00d58', 'data_vg': 'ceph-aa90e2bf-e75d-5c47-ae76-8a1384e00d58'})
2025-07-12 20:05:43.184991 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-d5945923-5bd4-5f45-a4a9-07ddacb4606e', 'data_vg': 'ceph-d5945923-5bd4-5f45-a4a9-07ddacb4606e'})
2025-07-12 20:05:43.184996 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a', 'data_vg': 'ceph-2d3a8e2a-8518-5d0a-afd8-96cafa5ccf1a'})
2025-07-12 20:05:43.185004 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2f895b30-8de9-512a-b128-a5c9585d4791', 'data_vg': 'ceph-2f895b30-8de9-512a-b128-a5c9585d4791'})
2025-07-12 20:05:43.185009 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-661525d0-45b6-5e60-bde8-1fec1e4af76b', 'data_vg': 'ceph-661525d0-45b6-5e60-bde8-1fec1e4af76b'})
2025-07-12 20:05:43.185014 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-71032f38-677b-542f-825f-c43a6d71b028', 'data_vg': 'ceph-71032f38-677b-542f-825f-c43a6d71b028'})
2025-07-12 20:05:43.185018 | orchestrator |
2025-07-12 20:05:43.185023 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-07-12 20:05:43.185028 | orchestrator | Saturday 12 July 2025 20:02:19 +0000 (0:00:45.119) 0:07:31.735 *********
2025-07-12 20:05:43.185032 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185037 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185041 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.185046 | orchestrator |
2025-07-12 20:05:43.185050 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-07-12 20:05:43.185055 | orchestrator | Saturday 12 July 2025 20:02:19 +0000 (0:00:00.555) 0:07:32.290 *********
2025-07-12 20:05:43.185059 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.185064 | orchestrator |
2025-07-12 20:05:43.185068 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-07-12 20:05:43.185073 | orchestrator | Saturday 12 July 2025 20:02:20 +0000 (0:00:00.594) 0:07:32.884 *********
2025-07-12 20:05:43.185077 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.185082 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.185087 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.185091 | orchestrator |
2025-07-12 20:05:43.185095 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-07-12 20:05:43.185100 | orchestrator | Saturday 12 July 2025 20:02:21 +0000 (0:00:00.708) 0:07:33.593 *********
2025-07-12 20:05:43.185104 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.185109 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.185114 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.185118 | orchestrator |
2025-07-12 20:05:43.185123 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-07-12 20:05:43.185127 | orchestrator | Saturday 12 July 2025 20:02:23 +0000 (0:00:02.865) 0:07:36.458 *********
2025-07-12 20:05:43.185132 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.185136 | orchestrator |
2025-07-12 20:05:43.185141 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-07-12 20:05:43.185145 | orchestrator | Saturday 12 July 2025 20:02:24 +0000 (0:00:00.640) 0:07:37.099 *********
2025-07-12 20:05:43.185150 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.185154 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.185159 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.185163 | orchestrator |
2025-07-12 20:05:43.185168 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-07-12 20:05:43.185176 | orchestrator | Saturday 12 July 2025 20:02:25 +0000 (0:00:01.225) 0:07:38.324 *********
2025-07-12 20:05:43.185181 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.185185 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.185190 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.185194 | orchestrator |
2025-07-12 20:05:43.185199 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-07-12 20:05:43.185203 | orchestrator | Saturday 12 July 2025 20:02:27 +0000 (0:00:01.427) 0:07:39.751 *********
2025-07-12 20:05:43.185208 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.185212 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.185217 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.185221 | orchestrator |
2025-07-12 20:05:43.185226 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-07-12 20:05:43.185230 | orchestrator | Saturday 12 July 2025 20:02:28 +0000 (0:00:01.675) 0:07:41.427 *********
2025-07-12 20:05:43.185235 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185240 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185244 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.185248 | orchestrator |
2025-07-12 20:05:43.185253 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-07-12 20:05:43.185258 | orchestrator | Saturday 12 July 2025 20:02:29 +0000 (0:00:00.311) 0:07:41.739 *********
2025-07-12 20:05:43.185262 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185267 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185271 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.185275 | orchestrator |
2025-07-12 20:05:43.185280 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-07-12 20:05:43.185285 | orchestrator | Saturday 12 July 2025 20:02:29 +0000 (0:00:00.310) 0:07:42.049 *********
2025-07-12 20:05:43.185289 | orchestrator | ok: [testbed-node-3] => (item=3)
2025-07-12 20:05:43.185294 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-07-12 20:05:43.185298 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-07-12 20:05:43.185302 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-12 20:05:43.185307 | orchestrator | ok: [testbed-node-4] => (item=4)
2025-07-12 20:05:43.185311 | orchestrator | ok: [testbed-node-5] => (item=5)
2025-07-12 20:05:43.185316 | orchestrator |
2025-07-12 20:05:43.185320 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-07-12 20:05:43.185325 | orchestrator | Saturday 12 July 2025 20:02:30 +0000 (0:00:01.326) 0:07:43.376 *********
2025-07-12 20:05:43.185329 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-07-12 20:05:43.185334 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-07-12 20:05:43.185342 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-07-12 20:05:43.185347 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-07-12 20:05:43.185351 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-07-12 20:05:43.185356 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-07-12 20:05:43.185360 | orchestrator |
2025-07-12 20:05:43.185365 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-07-12 20:05:43.185372 | orchestrator | Saturday 12 July 2025 20:02:33 +0000 (0:00:02.140) 0:07:45.516 *********
2025-07-12 20:05:43.185377 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-07-12 20:05:43.185381 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-07-12 20:05:43.185386 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-07-12 20:05:43.185390 | orchestrator | changed: [testbed-node-4] => (item=4)
2025-07-12 20:05:43.185395 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-07-12 20:05:43.185399 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-07-12 20:05:43.185404 | orchestrator |
2025-07-12 20:05:43.185408 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-07-12 20:05:43.185413 | orchestrator | Saturday 12 July 2025 20:02:37 +0000 (0:00:04.263) 0:07:49.779 *********
2025-07-12 20:05:43.185421 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185426 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185430 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:05:43.185435 | orchestrator |
2025-07-12 20:05:43.185439 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-07-12 20:05:43.185444 | orchestrator | Saturday 12 July 2025 20:02:40 +0000 (0:00:03.014) 0:07:52.794 *********
2025-07-12 20:05:43.185448 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185453 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185457 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-07-12 20:05:43.185462 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:05:43.185466 | orchestrator |
2025-07-12 20:05:43.185471 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-07-12 20:05:43.185475 | orchestrator | Saturday 12 July 2025 20:02:53 +0000 (0:00:13.401) 0:08:06.196 *********
2025-07-12 20:05:43.185480 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185484 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185489 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.185493 | orchestrator |
2025-07-12 20:05:43.185498 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 20:05:43.185502 | orchestrator | Saturday 12 July 2025 20:02:54 +0000 (0:00:01.150) 0:08:07.346 *********
2025-07-12 20:05:43.185506 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185510 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185514 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.185518 | orchestrator |
2025-07-12 20:05:43.185522 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-07-12 20:05:43.185526 | orchestrator | Saturday 12 July 2025 20:02:55 +0000 (0:00:00.730) 0:08:08.077 *********
2025-07-12 20:05:43.185530 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.185534 | orchestrator |
2025-07-12 20:05:43.185538 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-07-12 20:05:43.185543 | orchestrator | Saturday 12 July 2025 20:02:56 +0000 (0:00:00.558) 0:08:08.635 *********
2025-07-12 20:05:43.185547 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:05:43.185551 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:05:43.185555 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:05:43.185559 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185563 | orchestrator |
2025-07-12 20:05:43.185567 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-07-12 20:05:43.185571 | orchestrator | Saturday 12 July 2025 20:02:56 +0000 (0:00:00.392) 0:08:09.027 *********
2025-07-12 20:05:43.185575 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185579 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185583 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.185587 | orchestrator |
2025-07-12 20:05:43.185592 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-07-12 20:05:43.185596 | orchestrator | Saturday 12 July 2025 20:02:56 +0000 (0:00:00.331) 0:08:09.358 *********
2025-07-12 20:05:43.185600 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185604 | orchestrator |
2025-07-12 20:05:43.185608 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-07-12 20:05:43.185612 | orchestrator | Saturday 12 July 2025 20:02:57 +0000 (0:00:00.225) 0:08:09.583 *********
2025-07-12 20:05:43.185616 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185620 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185624 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.185628 | orchestrator |
2025-07-12 20:05:43.185632 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-07-12 20:05:43.185639 | orchestrator | Saturday 12 July 2025 20:02:57 +0000 (0:00:00.566) 0:08:10.150 *********
2025-07-12 20:05:43.185643 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185648 | orchestrator |
2025-07-12 20:05:43.185652 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-07-12 20:05:43.185656 | orchestrator | Saturday 12 July 2025 20:02:57 +0000 (0:00:00.215) 0:08:10.365 *********
2025-07-12 20:05:43.185660 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185664 | orchestrator |
2025-07-12 20:05:43.185668 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-07-12 20:05:43.185672 | orchestrator | Saturday 12 July 2025 20:02:58 +0000 (0:00:00.220) 0:08:10.586 *********
2025-07-12 20:05:43.185676 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185680 | orchestrator |
2025-07-12 20:05:43.185684 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-07-12 20:05:43.185691 | orchestrator | Saturday 12 July 2025 20:02:58 +0000 (0:00:00.136) 0:08:10.723 *********
2025-07-12 20:05:43.185695 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185699 | orchestrator |
2025-07-12 20:05:43.185703 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-07-12 20:05:43.185707 | orchestrator | Saturday 12 July 2025 20:02:58 +0000 (0:00:00.225) 0:08:10.948 *********
2025-07-12 20:05:43.185711 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185716 | orchestrator |
2025-07-12 20:05:43.185721 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-07-12 20:05:43.185726 | orchestrator | Saturday 12 July 2025 20:02:58 +0000 (0:00:00.223) 0:08:11.172 *********
2025-07-12 20:05:43.185730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:05:43.185734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:05:43.185738 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:05:43.185742 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185746 | orchestrator |
2025-07-12 20:05:43.185751 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-07-12 20:05:43.185755 | orchestrator | Saturday 12 July 2025 20:02:59 +0000 (0:00:00.363) 0:08:11.535 *********
2025-07-12 20:05:43.185759 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185763 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185767 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.185771 | orchestrator |
2025-07-12 20:05:43.185775 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-07-12 20:05:43.185779 | orchestrator | Saturday 12 July 2025 20:02:59 +0000 (0:00:00.316) 0:08:11.851 *********
2025-07-12 20:05:43.185783 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185787 | orchestrator |
2025-07-12 20:05:43.185791 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-07-12 20:05:43.185795 | orchestrator | Saturday 12 July 2025 20:03:00 +0000 (0:00:00.772) 0:08:12.624 *********
2025-07-12 20:05:43.185799 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185803 | orchestrator |
2025-07-12 20:05:43.185808 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-07-12 20:05:43.185812 | orchestrator |
2025-07-12 20:05:43.185816 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 20:05:43.185820 | orchestrator | Saturday 12 July 2025 20:03:00 +0000 (0:00:00.654) 0:08:13.278 *********
2025-07-12 20:05:43.185824 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.185829 | orchestrator |
2025-07-12 20:05:43.185833 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 20:05:43.185837 | orchestrator | Saturday 12 July 2025 20:03:01 +0000 (0:00:01.175) 0:08:14.454 *********
2025-07-12 20:05:43.185841 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.185849 | orchestrator |
2025-07-12 20:05:43.185854 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 20:05:43.185858 | orchestrator | Saturday 12 July 2025 20:03:03 +0000 (0:00:01.210) 0:08:15.664 *********
2025-07-12 20:05:43.185862 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.185866 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.185870 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.185874 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.185878 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.185882 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.185886 | orchestrator |
2025-07-12 20:05:43.185891 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 20:05:43.185895 | orchestrator | Saturday 12 July 2025 20:03:04 +0000 (0:00:00.837) 0:08:16.501 *********
2025-07-12 20:05:43.185899 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.185903 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.185907 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.185911 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.185915 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.185919 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.185923 | orchestrator |
2025-07-12 20:05:43.185927 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 20:05:43.185931 | orchestrator | Saturday 12 July 2025 20:03:05 +0000 (0:00:01.023) 0:08:17.525 *********
2025-07-12 20:05:43.185936 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.185940 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.185944 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.185948 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.185952 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.185956 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.185960 | orchestrator |
2025-07-12 20:05:43.185974 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 20:05:43.185979 | orchestrator | Saturday 12 July 2025 20:03:06 +0000 (0:00:01.242) 0:08:18.768 *********
2025-07-12 20:05:43.185983 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.185987 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.185991 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.185995 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.185999 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.186003 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.186007 | orchestrator |
2025-07-12 20:05:43.186011 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 20:05:43.186029 | orchestrator | Saturday 12 July 2025 20:03:07 +0000 (0:00:01.131) 0:08:19.899 *********
2025-07-12 20:05:43.186034 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.186038 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.186042 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.186046 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.186050 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.186054 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.186058 | orchestrator |
2025-07-12 20:05:43.186062 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 20:05:43.186069 | orchestrator | Saturday 12 July 2025 20:03:08 +0000 (0:00:00.996) 0:08:20.896 *********
2025-07-12 20:05:43.186073 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.186077 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.186081 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.186085 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.186090 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.186094 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.186098 | orchestrator |
2025-07-12 20:05:43.186104 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 20:05:43.186112 | orchestrator | Saturday 12 July 2025 20:03:08 +0000 (0:00:00.588) 0:08:21.484 *********
2025-07-12 20:05:43.186116 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.186120 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.186124 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.186128 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.186132 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.186137 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.186141 | orchestrator |
2025-07-12 20:05:43.186145 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 20:05:43.186149 | orchestrator | Saturday 12 July 2025 20:03:09 +0000 (0:00:00.810) 0:08:22.295 *********
2025-07-12 20:05:43.186153 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.186157 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.186161 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.186166 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.186170 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.186174 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.186178 | orchestrator |
2025-07-12 20:05:43.186182 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 20:05:43.186186 | orchestrator | Saturday 12 July 2025 20:03:10 +0000 (0:00:01.068) 0:08:23.363 *********
2025-07-12 20:05:43.186190 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.186194 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.186198 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.186203 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.186207 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.186211 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.186215 | orchestrator |
2025-07-12 20:05:43.186219 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 20:05:43.186223 | orchestrator | Saturday 12 July 2025 20:03:12 +0000 (0:00:01.226) 0:08:24.590 *********
2025-07-12 20:05:43.186227 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:05:43.186231 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:05:43.186236 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:05:43.186240 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.186244 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.186248 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.186252 | orchestrator |
2025-07-12 20:05:43.186256 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 20:05:43.186260 | orchestrator | Saturday 12 July 2025 20:03:12 +0000 (0:00:00.594) 0:08:25.184 *********
2025-07-12 20:05:43.186264 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:05:43.186268 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:05:43.186273 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:05:43.186277 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.186281 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.186285 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.186289 | orchestrator | 2025-07-12 20:05:43.186293 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 20:05:43.186297 | orchestrator | Saturday 12 July 2025 20:03:13 +0000 (0:00:00.777) 0:08:25.962 ********* 2025-07-12 20:05:43.186301 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.186305 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.186310 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.186314 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.186318 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.186322 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.186326 | orchestrator | 2025-07-12 20:05:43.186330 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 20:05:43.186334 | orchestrator | Saturday 12 July 2025 20:03:14 +0000 (0:00:00.619) 0:08:26.582 ********* 2025-07-12 20:05:43.186338 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.186343 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.186347 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.186353 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.186358 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.186362 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.186366 | orchestrator | 2025-07-12 20:05:43.186370 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 20:05:43.186374 | orchestrator | Saturday 12 July 2025 20:03:14 +0000 (0:00:00.825) 0:08:27.408 ********* 2025-07-12 20:05:43.186378 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.186382 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.186386 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.186390 | orchestrator 
| ok: [testbed-node-3] 2025-07-12 20:05:43.186394 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.186399 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.186403 | orchestrator | 2025-07-12 20:05:43.186407 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 20:05:43.186411 | orchestrator | Saturday 12 July 2025 20:03:15 +0000 (0:00:00.643) 0:08:28.051 ********* 2025-07-12 20:05:43.186415 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.186419 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.186423 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.186427 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.186431 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.186435 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.186440 | orchestrator | 2025-07-12 20:05:43.186444 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 20:05:43.186448 | orchestrator | Saturday 12 July 2025 20:03:16 +0000 (0:00:00.824) 0:08:28.876 ********* 2025-07-12 20:05:43.186452 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:43.186456 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:43.186460 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:43.186464 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.186468 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.186472 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.186476 | orchestrator | 2025-07-12 20:05:43.186483 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 20:05:43.186487 | orchestrator | Saturday 12 July 2025 20:03:16 +0000 (0:00:00.593) 0:08:29.470 ********* 2025-07-12 20:05:43.186491 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.186496 | orchestrator | ok: [testbed-node-1] 
2025-07-12 20:05:43.186500 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.186504 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.186508 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.186512 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.186516 | orchestrator | 2025-07-12 20:05:43.186522 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 20:05:43.186527 | orchestrator | Saturday 12 July 2025 20:03:17 +0000 (0:00:00.789) 0:08:30.260 ********* 2025-07-12 20:05:43.186531 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.186535 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.186539 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.186543 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.186547 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.186551 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.186555 | orchestrator | 2025-07-12 20:05:43.186559 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 20:05:43.186564 | orchestrator | Saturday 12 July 2025 20:03:18 +0000 (0:00:00.683) 0:08:30.943 ********* 2025-07-12 20:05:43.186568 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.186572 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.186576 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.186580 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.186584 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.186588 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.186592 | orchestrator | 2025-07-12 20:05:43.186596 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-07-12 20:05:43.186605 | orchestrator | Saturday 12 July 2025 20:03:19 +0000 (0:00:01.244) 0:08:32.188 ********* 2025-07-12 20:05:43.186610 | orchestrator | changed: [testbed-node-0] 2025-07-12 
20:05:43.186614 | orchestrator | 2025-07-12 20:05:43.186618 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-07-12 20:05:43.186622 | orchestrator | Saturday 12 July 2025 20:03:23 +0000 (0:00:04.019) 0:08:36.207 ********* 2025-07-12 20:05:43.186626 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.186630 | orchestrator | 2025-07-12 20:05:43.186635 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-07-12 20:05:43.186639 | orchestrator | Saturday 12 July 2025 20:03:25 +0000 (0:00:02.254) 0:08:38.461 ********* 2025-07-12 20:05:43.186643 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.186647 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.186651 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.186655 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:05:43.186659 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:43.186663 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:43.186667 | orchestrator | 2025-07-12 20:05:43.186671 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-07-12 20:05:43.186676 | orchestrator | Saturday 12 July 2025 20:03:27 +0000 (0:00:01.960) 0:08:40.422 ********* 2025-07-12 20:05:43.186680 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.186684 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.186688 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.186692 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:05:43.186696 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:43.186700 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:43.186704 | orchestrator | 2025-07-12 20:05:43.186708 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-07-12 20:05:43.186713 | orchestrator | Saturday 12 July 2025 20:03:29 +0000 
(0:00:01.250) 0:08:41.672 ********* 2025-07-12 20:05:43.186717 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.186722 | orchestrator | 2025-07-12 20:05:43.186726 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-07-12 20:05:43.186730 | orchestrator | Saturday 12 July 2025 20:03:30 +0000 (0:00:01.334) 0:08:43.007 ********* 2025-07-12 20:05:43.186734 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.186738 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.186742 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.186746 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:05:43.186750 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:43.186754 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:43.186758 | orchestrator | 2025-07-12 20:05:43.186762 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-07-12 20:05:43.186766 | orchestrator | Saturday 12 July 2025 20:03:32 +0000 (0:00:01.971) 0:08:44.979 ********* 2025-07-12 20:05:43.186771 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.186775 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.186779 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.186783 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:05:43.186787 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:43.186791 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:43.186795 | orchestrator | 2025-07-12 20:05:43.186799 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-07-12 20:05:43.186803 | orchestrator | Saturday 12 July 2025 20:03:35 +0000 (0:00:03.256) 0:08:48.235 ********* 2025-07-12 20:05:43.186808 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.186812 | orchestrator | 2025-07-12 20:05:43.186816 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-07-12 20:05:43.186824 | orchestrator | Saturday 12 July 2025 20:03:37 +0000 (0:00:01.265) 0:08:49.501 ********* 2025-07-12 20:05:43.186828 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.186832 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.186836 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:43.186840 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.186844 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.186848 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.186853 | orchestrator | 2025-07-12 20:05:43.186857 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-07-12 20:05:43.186863 | orchestrator | Saturday 12 July 2025 20:03:37 +0000 (0:00:00.829) 0:08:50.330 ********* 2025-07-12 20:05:43.186868 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:43.186872 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:43.186876 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:43.186880 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:05:43.186884 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:43.186888 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:43.186892 | orchestrator | 2025-07-12 20:05:43.186896 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-07-12 20:05:43.186902 | orchestrator | Saturday 12 July 2025 20:03:39 +0000 (0:00:02.153) 0:08:52.483 ********* 2025-07-12 20:05:43.186907 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:43.186911 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:43.186915 | orchestrator | ok: 
[testbed-node-2] 2025-07-12 20:05:43.186919 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.186923 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.186927 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.186931 | orchestrator | 2025-07-12 20:05:43.186936 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-07-12 20:05:43.186940 | orchestrator | 2025-07-12 20:05:43.186944 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 20:05:43.186948 | orchestrator | Saturday 12 July 2025 20:03:41 +0000 (0:00:01.085) 0:08:53.568 ********* 2025-07-12 20:05:43.186952 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.186956 | orchestrator | 2025-07-12 20:05:43.186961 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 20:05:43.186987 | orchestrator | Saturday 12 July 2025 20:03:41 +0000 (0:00:00.502) 0:08:54.071 ********* 2025-07-12 20:05:43.186992 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.186996 | orchestrator | 2025-07-12 20:05:43.187000 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 20:05:43.187004 | orchestrator | Saturday 12 July 2025 20:03:42 +0000 (0:00:00.722) 0:08:54.794 ********* 2025-07-12 20:05:43.187008 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187013 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187017 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187021 | orchestrator | 2025-07-12 20:05:43.187025 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 20:05:43.187029 | orchestrator | 
Saturday 12 July 2025 20:03:42 +0000 (0:00:00.303) 0:08:55.098 ********* 2025-07-12 20:05:43.187033 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.187037 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187041 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187045 | orchestrator | 2025-07-12 20:05:43.187049 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 20:05:43.187054 | orchestrator | Saturday 12 July 2025 20:03:43 +0000 (0:00:00.710) 0:08:55.808 ********* 2025-07-12 20:05:43.187058 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.187062 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187066 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187070 | orchestrator | 2025-07-12 20:05:43.187078 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 20:05:43.187082 | orchestrator | Saturday 12 July 2025 20:03:44 +0000 (0:00:01.023) 0:08:56.832 ********* 2025-07-12 20:05:43.187086 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.187090 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187094 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187098 | orchestrator | 2025-07-12 20:05:43.187102 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 20:05:43.187107 | orchestrator | Saturday 12 July 2025 20:03:45 +0000 (0:00:00.771) 0:08:57.603 ********* 2025-07-12 20:05:43.187111 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187115 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187119 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187123 | orchestrator | 2025-07-12 20:05:43.187127 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 20:05:43.187131 | orchestrator | Saturday 12 July 2025 20:03:45 +0000 (0:00:00.312) 
0:08:57.916 ********* 2025-07-12 20:05:43.187135 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187139 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187143 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187148 | orchestrator | 2025-07-12 20:05:43.187152 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 20:05:43.187156 | orchestrator | Saturday 12 July 2025 20:03:45 +0000 (0:00:00.314) 0:08:58.231 ********* 2025-07-12 20:05:43.187160 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187164 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187168 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187172 | orchestrator | 2025-07-12 20:05:43.187176 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 20:05:43.187180 | orchestrator | Saturday 12 July 2025 20:03:46 +0000 (0:00:00.589) 0:08:58.820 ********* 2025-07-12 20:05:43.187184 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.187188 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187193 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187197 | orchestrator | 2025-07-12 20:05:43.187201 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 20:05:43.187205 | orchestrator | Saturday 12 July 2025 20:03:47 +0000 (0:00:00.711) 0:08:59.532 ********* 2025-07-12 20:05:43.187209 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.187213 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187217 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187221 | orchestrator | 2025-07-12 20:05:43.187226 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 20:05:43.187230 | orchestrator | Saturday 12 July 2025 20:03:47 +0000 (0:00:00.766) 0:09:00.298 ********* 2025-07-12 
20:05:43.187234 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187238 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187242 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187246 | orchestrator | 2025-07-12 20:05:43.187250 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 20:05:43.187257 | orchestrator | Saturday 12 July 2025 20:03:48 +0000 (0:00:00.294) 0:09:00.593 ********* 2025-07-12 20:05:43.187261 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187266 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187270 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187274 | orchestrator | 2025-07-12 20:05:43.187278 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 20:05:43.187282 | orchestrator | Saturday 12 July 2025 20:03:48 +0000 (0:00:00.556) 0:09:01.149 ********* 2025-07-12 20:05:43.187286 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.187292 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187297 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187301 | orchestrator | 2025-07-12 20:05:43.187305 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 20:05:43.187312 | orchestrator | Saturday 12 July 2025 20:03:49 +0000 (0:00:00.347) 0:09:01.497 ********* 2025-07-12 20:05:43.187316 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.187321 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187325 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187329 | orchestrator | 2025-07-12 20:05:43.187333 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 20:05:43.187337 | orchestrator | Saturday 12 July 2025 20:03:49 +0000 (0:00:00.324) 0:09:01.821 ********* 2025-07-12 20:05:43.187341 | orchestrator | ok: 
[testbed-node-3] 2025-07-12 20:05:43.187345 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187349 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187353 | orchestrator | 2025-07-12 20:05:43.187358 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 20:05:43.187362 | orchestrator | Saturday 12 July 2025 20:03:49 +0000 (0:00:00.324) 0:09:02.145 ********* 2025-07-12 20:05:43.187366 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187370 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187374 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187378 | orchestrator | 2025-07-12 20:05:43.187382 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 20:05:43.187386 | orchestrator | Saturday 12 July 2025 20:03:50 +0000 (0:00:00.625) 0:09:02.771 ********* 2025-07-12 20:05:43.187390 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187394 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187398 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187403 | orchestrator | 2025-07-12 20:05:43.187407 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 20:05:43.187411 | orchestrator | Saturday 12 July 2025 20:03:50 +0000 (0:00:00.358) 0:09:03.130 ********* 2025-07-12 20:05:43.187415 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187419 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187423 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187427 | orchestrator | 2025-07-12 20:05:43.187431 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 20:05:43.187435 | orchestrator | Saturday 12 July 2025 20:03:50 +0000 (0:00:00.320) 0:09:03.450 ********* 2025-07-12 20:05:43.187440 | orchestrator | ok: [testbed-node-3] 
2025-07-12 20:05:43.187444 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187448 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187452 | orchestrator | 2025-07-12 20:05:43.187456 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 20:05:43.187460 | orchestrator | Saturday 12 July 2025 20:03:51 +0000 (0:00:00.359) 0:09:03.810 ********* 2025-07-12 20:05:43.187464 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:43.187468 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:43.187472 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:43.187476 | orchestrator | 2025-07-12 20:05:43.187480 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-07-12 20:05:43.187484 | orchestrator | Saturday 12 July 2025 20:03:52 +0000 (0:00:00.830) 0:09:04.641 ********* 2025-07-12 20:05:43.187489 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:43.187493 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:43.187497 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-07-12 20:05:43.187501 | orchestrator | 2025-07-12 20:05:43.187505 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-07-12 20:05:43.187509 | orchestrator | Saturday 12 July 2025 20:03:52 +0000 (0:00:00.295) 0:09:04.936 ********* 2025-07-12 20:05:43.187513 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:05:43.187518 | orchestrator | 2025-07-12 20:05:43.187521 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-07-12 20:05:43.187525 | orchestrator | Saturday 12 July 2025 20:03:54 +0000 (0:00:02.144) 0:09:07.081 ********* 2025-07-12 20:05:43.187530 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-07-12 20:05:43.187538 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:43.187542 | orchestrator | 2025-07-12 20:05:43.187546 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-07-12 20:05:43.187550 | orchestrator | Saturday 12 July 2025 20:03:54 +0000 (0:00:00.201) 0:09:07.282 ********* 2025-07-12 20:05:43.187554 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:05:43.187562 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:05:43.187566 | orchestrator | 2025-07-12 20:05:43.187570 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-07-12 20:05:43.187576 | orchestrator | Saturday 12 July 2025 20:04:02 +0000 (0:00:07.932) 0:09:15.214 ********* 2025-07-12 20:05:43.187580 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:05:43.187583 | orchestrator | 2025-07-12 20:05:43.187587 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-07-12 20:05:43.187591 | orchestrator | Saturday 12 July 2025 20:04:06 +0000 (0:00:03.858) 0:09:19.073 ********* 2025-07-12 20:05:43.187596 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:05:43.187600 | orchestrator | 2025-07-12 20:05:43.187604 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-07-12 20:05:43.187608 | orchestrator | Saturday 12 July 2025 20:04:07 +0000 (0:00:00.509) 0:09:19.582 ********* 2025-07-12 20:05:43.187612 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-12 20:05:43.187616 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-12 20:05:43.187619 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-07-12 20:05:43.187623 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-07-12 20:05:43.187627 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-07-12 20:05:43.187631 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-07-12 20:05:43.187634 | orchestrator | 2025-07-12 20:05:43.187638 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-07-12 20:05:43.187642 | orchestrator | Saturday 12 July 2025 20:04:08 +0000 (0:00:01.125) 0:09:20.707 ********* 2025-07-12 20:05:43.187646 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:05:43.187649 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 20:05:43.187653 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 20:05:43.187657 | orchestrator | 2025-07-12 20:05:43.187660 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-07-12 20:05:43.187664 | orchestrator | Saturday 12 July 2025 20:04:11 +0000 (0:00:02.929) 0:09:23.637 ********* 2025-07-12 20:05:43.187668 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 20:05:43.187672 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-12 20:05:43.187675 | orchestrator | changed: [testbed-node-3] 
2025-07-12 20:05:43.187679 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:05:43.187683 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-07-12 20:05:43.187687 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.187690 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:05:43.187697 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-07-12 20:05:43.187701 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.187705 | orchestrator |
2025-07-12 20:05:43.187708 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-07-12 20:05:43.187712 | orchestrator | Saturday 12 July 2025 20:04:12 +0000 (0:00:01.711) 0:09:25.348 *********
2025-07-12 20:05:43.187716 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.187720 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.187723 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.187727 | orchestrator |
2025-07-12 20:05:43.187731 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-07-12 20:05:43.187735 | orchestrator | Saturday 12 July 2025 20:04:15 +0000 (0:00:02.778) 0:09:28.126 *********
2025-07-12 20:05:43.187738 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.187742 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.187746 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.187749 | orchestrator |
2025-07-12 20:05:43.187753 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-07-12 20:05:43.187757 | orchestrator | Saturday 12 July 2025 20:04:16 +0000 (0:00:00.402) 0:09:28.529 *********
2025-07-12 20:05:43.187761 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.187765 | orchestrator |
2025-07-12 20:05:43.187768 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-07-12 20:05:43.187772 | orchestrator | Saturday 12 July 2025 20:04:16 +0000 (0:00:00.814) 0:09:29.343 *********
2025-07-12 20:05:43.187776 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.187780 | orchestrator |
2025-07-12 20:05:43.187783 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-07-12 20:05:43.187787 | orchestrator | Saturday 12 July 2025 20:04:17 +0000 (0:00:00.653) 0:09:29.997 *********
2025-07-12 20:05:43.187791 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.187795 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.187798 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.187802 | orchestrator |
2025-07-12 20:05:43.187806 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-07-12 20:05:43.187809 | orchestrator | Saturday 12 July 2025 20:04:18 +0000 (0:00:01.321) 0:09:31.319 *********
2025-07-12 20:05:43.187813 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.187817 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.187821 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.187824 | orchestrator |
2025-07-12 20:05:43.187828 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-07-12 20:05:43.187832 | orchestrator | Saturday 12 July 2025 20:04:20 +0000 (0:00:01.593) 0:09:32.913 *********
2025-07-12 20:05:43.187836 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.187839 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.187843 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.187847 | orchestrator |
2025-07-12 20:05:43.187850 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-07-12 20:05:43.187854 | orchestrator | Saturday 12 July 2025 20:04:22 +0000 (0:00:01.955) 0:09:34.722 *********
2025-07-12 20:05:43.187861 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.187864 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.187868 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.187872 | orchestrator |
2025-07-12 20:05:43.187876 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-07-12 20:05:43.187879 | orchestrator | Saturday 12 July 2025 20:04:24 +0000 (0:00:01.470) 0:09:36.678 *********
2025-07-12 20:05:43.187883 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.187887 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.187892 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.187899 | orchestrator |
2025-07-12 20:05:43.187903 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 20:05:43.187907 | orchestrator | Saturday 12 July 2025 20:04:25 +0000 (0:00:01.470) 0:09:38.148 *********
2025-07-12 20:05:43.187911 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.187915 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.187918 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.187922 | orchestrator |
2025-07-12 20:05:43.187926 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-07-12 20:05:43.187930 | orchestrator | Saturday 12 July 2025 20:04:26 +0000 (0:00:00.658) 0:09:38.807 *********
2025-07-12 20:05:43.187933 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.187937 | orchestrator |
2025-07-12 20:05:43.187941 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-07-12 20:05:43.187945 | orchestrator | Saturday 12 July 2025 20:04:27 +0000 (0:00:00.762) 0:09:39.569 *********
2025-07-12 20:05:43.187948 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.187952 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.187956 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.187960 | orchestrator |
2025-07-12 20:05:43.187963 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-07-12 20:05:43.187978 | orchestrator | Saturday 12 July 2025 20:04:27 +0000 (0:00:00.362) 0:09:39.932 *********
2025-07-12 20:05:43.187982 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.187985 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.187989 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.187993 | orchestrator |
2025-07-12 20:05:43.187996 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-07-12 20:05:43.188000 | orchestrator | Saturday 12 July 2025 20:04:28 +0000 (0:00:01.225) 0:09:41.158 *********
2025-07-12 20:05:43.188004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:05:43.188008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:05:43.188011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:05:43.188015 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188019 | orchestrator |
2025-07-12 20:05:43.188022 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-07-12 20:05:43.188026 | orchestrator | Saturday 12 July 2025 20:04:29 +0000 (0:00:00.852) 0:09:42.010 *********
2025-07-12 20:05:43.188030 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188034 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188037 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188041 | orchestrator |
2025-07-12 20:05:43.188045 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-07-12 20:05:43.188049 | orchestrator |
2025-07-12 20:05:43.188052 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 20:05:43.188056 | orchestrator | Saturday 12 July 2025 20:04:30 +0000 (0:00:00.804) 0:09:42.814 *********
2025-07-12 20:05:43.188060 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.188064 | orchestrator |
2025-07-12 20:05:43.188067 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 20:05:43.188071 | orchestrator | Saturday 12 July 2025 20:04:30 +0000 (0:00:00.510) 0:09:43.325 *********
2025-07-12 20:05:43.188075 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.188079 | orchestrator |
2025-07-12 20:05:43.188082 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 20:05:43.188086 | orchestrator | Saturday 12 July 2025 20:04:31 +0000 (0:00:00.721) 0:09:44.046 *********
2025-07-12 20:05:43.188090 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188094 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188100 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188104 | orchestrator |
2025-07-12 20:05:43.188108 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 20:05:43.188112 | orchestrator | Saturday 12 July 2025 20:04:31 +0000 (0:00:00.321) 0:09:44.368 *********
2025-07-12 20:05:43.188115 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188119 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188123 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188127 | orchestrator |
2025-07-12 20:05:43.188130 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 20:05:43.188134 | orchestrator | Saturday 12 July 2025 20:04:32 +0000 (0:00:00.716) 0:09:45.084 *********
2025-07-12 20:05:43.188138 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188142 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188145 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188149 | orchestrator |
2025-07-12 20:05:43.188153 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 20:05:43.188157 | orchestrator | Saturday 12 July 2025 20:04:33 +0000 (0:00:00.713) 0:09:45.798 *********
2025-07-12 20:05:43.188160 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188164 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188168 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188172 | orchestrator |
2025-07-12 20:05:43.188175 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 20:05:43.188179 | orchestrator | Saturday 12 July 2025 20:04:34 +0000 (0:00:00.991) 0:09:46.790 *********
2025-07-12 20:05:43.188183 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188187 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188193 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188197 | orchestrator |
2025-07-12 20:05:43.188200 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 20:05:43.188204 | orchestrator | Saturday 12 July 2025 20:04:34 +0000 (0:00:00.307) 0:09:47.097 *********
2025-07-12 20:05:43.188208 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188212 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188216 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188219 | orchestrator |
2025-07-12 20:05:43.188225 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 20:05:43.188229 | orchestrator | Saturday 12 July 2025 20:04:34 +0000 (0:00:00.307) 0:09:47.405 *********
2025-07-12 20:05:43.188233 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188237 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188240 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188244 | orchestrator |
2025-07-12 20:05:43.188248 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 20:05:43.188252 | orchestrator | Saturday 12 July 2025 20:04:35 +0000 (0:00:00.308) 0:09:47.713 *********
2025-07-12 20:05:43.188255 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188259 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188263 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188267 | orchestrator |
2025-07-12 20:05:43.188271 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 20:05:43.188274 | orchestrator | Saturday 12 July 2025 20:04:36 +0000 (0:00:01.016) 0:09:48.730 *********
2025-07-12 20:05:43.188278 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188282 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188286 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188289 | orchestrator |
2025-07-12 20:05:43.188293 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 20:05:43.188297 | orchestrator | Saturday 12 July 2025 20:04:36 +0000 (0:00:00.719) 0:09:49.450 *********
2025-07-12 20:05:43.188301 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188305 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188308 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188312 | orchestrator |
2025-07-12 20:05:43.188318 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 20:05:43.188322 | orchestrator | Saturday 12 July 2025 20:04:37 +0000 (0:00:00.315) 0:09:49.765 *********
2025-07-12 20:05:43.188326 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188330 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188334 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188337 | orchestrator |
2025-07-12 20:05:43.188341 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 20:05:43.188345 | orchestrator | Saturday 12 July 2025 20:04:37 +0000 (0:00:00.294) 0:09:50.059 *********
2025-07-12 20:05:43.188349 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188352 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188356 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188360 | orchestrator |
2025-07-12 20:05:43.188364 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 20:05:43.188368 | orchestrator | Saturday 12 July 2025 20:04:38 +0000 (0:00:00.556) 0:09:50.616 *********
2025-07-12 20:05:43.188371 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188375 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188379 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188383 | orchestrator |
2025-07-12 20:05:43.188386 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 20:05:43.188390 | orchestrator | Saturday 12 July 2025 20:04:38 +0000 (0:00:00.310) 0:09:50.926 *********
2025-07-12 20:05:43.188394 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188398 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188401 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188405 | orchestrator |
2025-07-12 20:05:43.188409 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 20:05:43.188413 | orchestrator | Saturday 12 July 2025 20:04:38 +0000 (0:00:00.328) 0:09:51.255 *********
2025-07-12 20:05:43.188417 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188420 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188424 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188428 | orchestrator |
2025-07-12 20:05:43.188432 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 20:05:43.188436 | orchestrator | Saturday 12 July 2025 20:04:39 +0000 (0:00:00.296) 0:09:51.551 *********
2025-07-12 20:05:43.188439 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188443 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188447 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188451 | orchestrator |
2025-07-12 20:05:43.188454 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 20:05:43.188458 | orchestrator | Saturday 12 July 2025 20:04:39 +0000 (0:00:00.563) 0:09:52.115 *********
2025-07-12 20:05:43.188462 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188466 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188469 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188473 | orchestrator |
2025-07-12 20:05:43.188477 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 20:05:43.188481 | orchestrator | Saturday 12 July 2025 20:04:39 +0000 (0:00:00.299) 0:09:52.414 *********
2025-07-12 20:05:43.188485 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188488 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188492 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188496 | orchestrator |
2025-07-12 20:05:43.188500 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 20:05:43.188503 | orchestrator | Saturday 12 July 2025 20:04:40 +0000 (0:00:00.335) 0:09:52.750 *********
2025-07-12 20:05:43.188507 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.188511 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.188515 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.188518 | orchestrator |
2025-07-12 20:05:43.188522 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-07-12 20:05:43.188526 | orchestrator | Saturday 12 July 2025 20:04:41 +0000 (0:00:00.757) 0:09:53.507 *********
2025-07-12 20:05:43.188533 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.188537 | orchestrator |
2025-07-12 20:05:43.188543 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-07-12 20:05:43.188547 | orchestrator | Saturday 12 July 2025 20:04:41 +0000 (0:00:00.552) 0:09:54.060 *********
2025-07-12 20:05:43.188550 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:05:43.188554 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:05:43.188558 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:05:43.188562 | orchestrator |
2025-07-12 20:05:43.188567 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-07-12 20:05:43.188571 | orchestrator | Saturday 12 July 2025 20:04:43 +0000 (0:00:02.230) 0:09:56.290 *********
2025-07-12 20:05:43.188575 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:05:43.188579 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-07-12 20:05:43.188582 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.188586 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 20:05:43.188590 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:05:43.188594 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.188597 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:05:43.188601 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-07-12 20:05:43.188605 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.188609 | orchestrator |
2025-07-12 20:05:43.188613 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-07-12 20:05:43.188616 | orchestrator | Saturday 12 July 2025 20:04:45 +0000 (0:00:01.465) 0:09:57.755 *********
2025-07-12 20:05:43.188620 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188624 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188628 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188631 | orchestrator |
2025-07-12 20:05:43.188635 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-07-12 20:05:43.188639 | orchestrator | Saturday 12 July 2025 20:04:45 +0000 (0:00:00.310) 0:09:58.066 *********
2025-07-12 20:05:43.188643 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.188646 | orchestrator |
2025-07-12 20:05:43.188650 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-07-12 20:05:43.188654 | orchestrator | Saturday 12 July 2025 20:04:46 +0000 (0:00:00.539) 0:09:58.605 *********
2025-07-12 20:05:43.188658 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 20:05:43.188661 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 20:05:43.188665 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 20:05:43.188669 | orchestrator |
2025-07-12 20:05:43.188673 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-07-12 20:05:43.188677 | orchestrator | Saturday 12 July 2025 20:04:47 +0000 (0:00:01.112) 0:09:59.717 *********
2025-07-12 20:05:43.188680 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:05:43.188684 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-07-12 20:05:43.188688 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:05:43.188692 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-07-12 20:05:43.188698 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:05:43.188702 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-07-12 20:05:43.188706 | orchestrator |
2025-07-12 20:05:43.188710 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-07-12 20:05:43.188714 | orchestrator | Saturday 12 July 2025 20:04:51 +0000 (0:00:04.550) 0:10:04.268 *********
2025-07-12 20:05:43.188717 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:05:43.188721 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:05:43.188725 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:05:43.188728 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:05:43.188732 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:05:43.188736 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:05:43.188740 | orchestrator |
2025-07-12 20:05:43.188743 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-07-12 20:05:43.188747 | orchestrator | Saturday 12 July 2025 20:04:53 +0000 (0:00:02.220) 0:10:06.489 *********
2025-07-12 20:05:43.188751 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 20:05:43.188755 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.188758 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:05:43.188762 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.188766 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:05:43.188770 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.188773 | orchestrator |
2025-07-12 20:05:43.188780 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-07-12 20:05:43.188784 | orchestrator | Saturday 12 July 2025 20:04:55 +0000 (0:00:01.234) 0:10:07.724 *********
2025-07-12 20:05:43.188788 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-07-12 20:05:43.188791 | orchestrator |
2025-07-12 20:05:43.188795 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-07-12 20:05:43.188800 | orchestrator | Saturday 12 July 2025 20:04:55 +0000 (0:00:00.242) 0:10:07.966 *********
2025-07-12 20:05:43.188805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188824 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188828 | orchestrator |
2025-07-12 20:05:43.188831 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-07-12 20:05:43.188835 | orchestrator | Saturday 12 July 2025 20:04:56 +0000 (0:00:01.039) 0:10:09.005 *********
2025-07-12 20:05:43.188839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188861 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188865 | orchestrator |
2025-07-12 20:05:43.188868 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-07-12 20:05:43.188872 | orchestrator | Saturday 12 July 2025 20:04:57 +0000 (0:00:01.258) 0:10:10.264 *********
2025-07-12 20:05:43.188876 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188880 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188884 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188887 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188891 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:05:43.188895 | orchestrator |
2025-07-12 20:05:43.188899 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-07-12 20:05:43.188903 | orchestrator | Saturday 12 July 2025 20:05:28 +0000 (0:00:31.108) 0:10:41.373 *********
2025-07-12 20:05:43.188906 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188910 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188914 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188918 | orchestrator |
2025-07-12 20:05:43.188922 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-07-12 20:05:43.188925 | orchestrator | Saturday 12 July 2025 20:05:29 +0000 (0:00:00.322) 0:10:41.695 *********
2025-07-12 20:05:43.188929 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.188933 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.188937 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.188940 | orchestrator |
2025-07-12 20:05:43.188944 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-07-12 20:05:43.188948 | orchestrator | Saturday 12 July 2025 20:05:29 +0000 (0:00:00.300) 0:10:41.995 *********
2025-07-12 20:05:43.188952 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.188955 | orchestrator |
2025-07-12 20:05:43.188959 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-07-12 20:05:43.188963 | orchestrator | Saturday 12 July 2025 20:05:30 +0000 (0:00:00.756) 0:10:42.752 *********
2025-07-12 20:05:43.188976 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.188980 | orchestrator |
2025-07-12 20:05:43.188986 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-07-12 20:05:43.188990 | orchestrator | Saturday 12 July 2025 20:05:30 +0000 (0:00:00.543) 0:10:43.295 *********
2025-07-12 20:05:43.188994 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.188997 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.189001 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.189005 | orchestrator |
2025-07-12 20:05:43.189010 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-07-12 20:05:43.189014 | orchestrator | Saturday 12 July 2025 20:05:32 +0000 (0:00:01.220) 0:10:44.516 *********
2025-07-12 20:05:43.189021 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.189025 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.189028 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.189032 | orchestrator |
2025-07-12 20:05:43.189036 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-07-12 20:05:43.189040 | orchestrator | Saturday 12 July 2025 20:05:33 +0000 (0:00:01.438) 0:10:45.954 *********
2025-07-12 20:05:43.189044 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:05:43.189047 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:05:43.189051 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:05:43.189055 | orchestrator |
2025-07-12 20:05:43.189059 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-07-12 20:05:43.189062 | orchestrator | Saturday 12 July 2025 20:05:35 +0000 (0:00:01.892) 0:10:47.846 *********
2025-07-12 20:05:43.189066 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 20:05:43.189070 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 20:05:43.189074 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 20:05:43.189078 | orchestrator |
2025-07-12 20:05:43.189082 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 20:05:43.189085 | orchestrator | Saturday 12 July 2025 20:05:38 +0000 (0:00:02.701) 0:10:50.548 *********
2025-07-12 20:05:43.189089 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.189093 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.189097 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.189100 | orchestrator |
2025-07-12 20:05:43.189104 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-07-12 20:05:43.189108 | orchestrator | Saturday 12 July 2025 20:05:38 +0000 (0:00:00.351) 0:10:50.900 *********
2025-07-12 20:05:43.189112 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:05:43.189115 | orchestrator |
2025-07-12 20:05:43.189119 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-07-12 20:05:43.189123 | orchestrator | Saturday 12 July 2025 20:05:38 +0000 (0:00:00.511) 0:10:51.411 *********
2025-07-12 20:05:43.189127 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.189130 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.189134 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.189138 | orchestrator |
2025-07-12 20:05:43.189142 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-07-12 20:05:43.189145 | orchestrator | Saturday 12 July 2025 20:05:39 +0000 (0:00:00.573) 0:10:51.985 *********
2025-07-12 20:05:43.189149 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.189153 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:05:43.189157 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:05:43.189161 | orchestrator |
2025-07-12 20:05:43.189164 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-07-12 20:05:43.189168 | orchestrator | Saturday 12 July 2025 20:05:39 +0000 (0:00:00.330) 0:10:52.316 *********
2025-07-12 20:05:43.189172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:05:43.189176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:05:43.189179 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:05:43.189183 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:05:43.189187 | orchestrator |
2025-07-12 20:05:43.189191 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-07-12 20:05:43.189194 | orchestrator | Saturday 12 July 2025 20:05:40 +0000 (0:00:00.586) 0:10:52.902 *********
2025-07-12 20:05:43.189198 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:05:43.189206 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:05:43.189210 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:05:43.189214 | orchestrator |
2025-07-12 20:05:43.189217 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:05:43.189221 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-07-12 20:05:43.189225 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-07-12 20:05:43.189229 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-07-12 20:05:43.189233 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-07-12 20:05:43.189237 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-07-12 20:05:43.189243 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-07-12 20:05:43.189247 | orchestrator |
2025-07-12 20:05:43.189250 | orchestrator |
2025-07-12 20:05:43.189254 | orchestrator |
2025-07-12 20:05:43.189258 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:05:43.189262 | orchestrator | Saturday 12 July 2025 20:05:40 +0000 (0:00:00.237) 0:10:53.139 *********
2025-07-12 20:05:43.189267 | orchestrator | ===============================================================================
2025-07-12 20:05:43.189271 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 64.60s
2025-07-12 20:05:43.189275 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 45.12s
2025-07-12 20:05:43.189279 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.37s
2025-07-12 20:05:43.189283 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.11s
2025-07-12 20:05:43.189286 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.40s
2025-07-12 20:05:43.189290 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.40s
2025-07-12 20:05:43.189294 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.31s
2025-07-12 20:05:43.189298 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.32s
2025-07-12 20:05:43.189302 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.93s
2025-07-12 20:05:43.189305 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.68s
2025-07-12 20:05:43.189309 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.31s
2025-07-12 20:05:43.189313 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.77s
2025-07-12 20:05:43.189317 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.55s
2025-07-12 20:05:43.189320 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.26s
2025-07-12 20:05:43.189324 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.02s
2025-07-12 20:05:43.189328 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.86s
2025-07-12 20:05:43.189332 | orchestrator | ceph-mon : Copy
admin keyring over to mons ------------------------------ 3.65s 2025-07-12 20:05:43.189335 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.41s 2025-07-12 20:05:43.189339 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.26s 2025-07-12 20:05:43.189343 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.20s 2025-07-12 20:05:43.189347 | orchestrator | 2025-07-12 20:05:43 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:05:43.189353 | orchestrator | 2025-07-12 20:05:43 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:05:43.189357 | orchestrator | 2025-07-12 20:05:43 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:05:43.189361 | orchestrator | 2025-07-12 20:05:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:05:46.229539 | orchestrator | 2025-07-12 20:05:46 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:05:46.230201 | orchestrator | 2025-07-12 20:05:46 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:05:46.230948 | orchestrator | 2025-07-12 20:05:46 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:05:46.231030 | orchestrator | 2025-07-12 20:05:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:05:49.269615 | orchestrator | 2025-07-12 20:05:49 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:05:49.271262 | orchestrator | 2025-07-12 20:05:49 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:05:49.271296 | orchestrator | 2025-07-12 20:05:49 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:05:49.271526 | orchestrator | 2025-07-12 20:05:49 | INFO  | Wait 1 second(s) until 
the next check 2025-07-12 20:05:52.326306 | orchestrator | 2025-07-12 20:05:52 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:05:52.327005 | orchestrator | 2025-07-12 20:05:52 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:05:52.328836 | orchestrator | 2025-07-12 20:05:52 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:05:52.328913 | orchestrator | 2025-07-12 20:05:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:05:55.388571 | orchestrator | 2025-07-12 20:05:55 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:05:55.389220 | orchestrator | 2025-07-12 20:05:55 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:05:55.389276 | orchestrator | 2025-07-12 20:05:55 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:05:55.389416 | orchestrator | 2025-07-12 20:05:55 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:05:58.437512 | orchestrator | 2025-07-12 20:05:58 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:05:58.442642 | orchestrator | 2025-07-12 20:05:58 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:05:58.443197 | orchestrator | 2025-07-12 20:05:58 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:05:58.443718 | orchestrator | 2025-07-12 20:05:58 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:01.496157 | orchestrator | 2025-07-12 20:06:01 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:01.499005 | orchestrator | 2025-07-12 20:06:01 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:01.502727 | orchestrator | 2025-07-12 20:06:01 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 
20:06:01.503209 | orchestrator | 2025-07-12 20:06:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:04.538906 | orchestrator | 2025-07-12 20:06:04 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:04.540443 | orchestrator | 2025-07-12 20:06:04 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:04.542158 | orchestrator | 2025-07-12 20:06:04 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:04.542305 | orchestrator | 2025-07-12 20:06:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:07.586424 | orchestrator | 2025-07-12 20:06:07 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:07.587045 | orchestrator | 2025-07-12 20:06:07 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:07.587351 | orchestrator | 2025-07-12 20:06:07 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:07.587387 | orchestrator | 2025-07-12 20:06:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:10.635346 | orchestrator | 2025-07-12 20:06:10 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:10.637725 | orchestrator | 2025-07-12 20:06:10 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:10.640629 | orchestrator | 2025-07-12 20:06:10 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:10.640658 | orchestrator | 2025-07-12 20:06:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:13.689492 | orchestrator | 2025-07-12 20:06:13 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:13.691929 | orchestrator | 2025-07-12 20:06:13 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:13.693057 | orchestrator | 2025-07-12 20:06:13 | 
INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:13.693100 | orchestrator | 2025-07-12 20:06:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:16.744509 | orchestrator | 2025-07-12 20:06:16 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:16.745479 | orchestrator | 2025-07-12 20:06:16 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:16.746467 | orchestrator | 2025-07-12 20:06:16 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:16.746490 | orchestrator | 2025-07-12 20:06:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:19.788571 | orchestrator | 2025-07-12 20:06:19 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:19.791113 | orchestrator | 2025-07-12 20:06:19 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:19.793095 | orchestrator | 2025-07-12 20:06:19 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:19.793445 | orchestrator | 2025-07-12 20:06:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:22.841687 | orchestrator | 2025-07-12 20:06:22 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:22.841941 | orchestrator | 2025-07-12 20:06:22 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:22.842806 | orchestrator | 2025-07-12 20:06:22 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:22.842821 | orchestrator | 2025-07-12 20:06:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:25.898147 | orchestrator | 2025-07-12 20:06:25 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:25.899725 | orchestrator | 2025-07-12 20:06:25 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in 
state STARTED 2025-07-12 20:06:25.902281 | orchestrator | 2025-07-12 20:06:25 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:25.902678 | orchestrator | 2025-07-12 20:06:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:28.949481 | orchestrator | 2025-07-12 20:06:28 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:28.952112 | orchestrator | 2025-07-12 20:06:28 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:28.953793 | orchestrator | 2025-07-12 20:06:28 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:28.953931 | orchestrator | 2025-07-12 20:06:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:31.992446 | orchestrator | 2025-07-12 20:06:31 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:31.993913 | orchestrator | 2025-07-12 20:06:31 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:31.995126 | orchestrator | 2025-07-12 20:06:31 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:31.995413 | orchestrator | 2025-07-12 20:06:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:35.036308 | orchestrator | 2025-07-12 20:06:35 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:35.038219 | orchestrator | 2025-07-12 20:06:35 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:35.039552 | orchestrator | 2025-07-12 20:06:35 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:35.039584 | orchestrator | 2025-07-12 20:06:35 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:38.083993 | orchestrator | 2025-07-12 20:06:38 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:38.085518 | orchestrator 
| 2025-07-12 20:06:38 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:38.087862 | orchestrator | 2025-07-12 20:06:38 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:38.087943 | orchestrator | 2025-07-12 20:06:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:41.128125 | orchestrator | 2025-07-12 20:06:41 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:41.129322 | orchestrator | 2025-07-12 20:06:41 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state STARTED 2025-07-12 20:06:41.130753 | orchestrator | 2025-07-12 20:06:41 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:41.130784 | orchestrator | 2025-07-12 20:06:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:44.183940 | orchestrator | 2025-07-12 20:06:44 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED 2025-07-12 20:06:44.187902 | orchestrator | 2025-07-12 20:06:44 | INFO  | Task 7aacc1d6-d8fb-4fc6-a8c1-7a630a202f2e is in state SUCCESS 2025-07-12 20:06:44.190791 | orchestrator | 2025-07-12 20:06:44.191137 | orchestrator | 2025-07-12 20:06:44.191174 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:06:44.191218 | orchestrator | 2025-07-12 20:06:44.191429 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:06:44.191481 | orchestrator | Saturday 12 July 2025 20:03:44 +0000 (0:00:00.273) 0:00:00.273 ********* 2025-07-12 20:06:44.191502 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:44.191521 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:44.191541 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:44.191594 | orchestrator | 2025-07-12 20:06:44.191614 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
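The task-watcher records above ("Task … is in state STARTED", "Wait 1 second(s) until the next check") follow a plain poll-until-done loop. A minimal Python sketch of that shape, where `get_state` is a hypothetical stand-in for the real task-state lookup:

```python
import time


def wait_for_tasks(task_ids, get_state, delay=1.0):
    """Poll every task until none is still STARTED, sleeping `delay`
    seconds between rounds -- the same shape as the log's wait loop.
    `get_state` is a hypothetical callback, not the real osism API."""
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            if state != "STARTED":
                results[task_id] = state  # terminal state reached
        pending -= set(results)
        if pending:
            time.sleep(delay)  # "Wait 1 second(s) until the next check"
    return results
```

As in the log, each task is polled independently, so one task can reach SUCCESS while the others are still STARTED.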
2025-07-12 20:06:44.191635 | orchestrator | Saturday 12 July 2025 20:03:44 +0000 (0:00:00.309) 0:00:00.582 *********
2025-07-12 20:06:44.191654 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-07-12 20:06:44.191673 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-07-12 20:06:44.191693 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-07-12 20:06:44.191728 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-07-12 20:06:44.191751 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-12 20:06:44.191762 | orchestrator | Saturday 12 July 2025 20:03:45 +0000 (0:00:00.428) 0:00:01.011 *********
2025-07-12 20:06:44.191774 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:06:44.191796 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-07-12 20:06:44.191807 | orchestrator | Saturday 12 July 2025 20:03:45 +0000 (0:00:00.547) 0:00:01.558 *********
2025-07-12 20:06:44.191818 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 20:06:44.191829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 20:06:44.191840 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-07-12 20:06:44.191862 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-07-12 20:06:44.191873 | orchestrator | Saturday 12 July 2025 20:03:46 +0000 (0:00:00.786) 0:00:02.345 *********
2025-07-12 20:06:44.191939 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.192090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.192156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.192192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.192207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.192221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.192233 | orchestrator | 2025-07-12 20:06:44.192245 | orchestrator | TASK [opensearch : 
include_tasks] ********************************************** 2025-07-12 20:06:44.192256 | orchestrator | Saturday 12 July 2025 20:03:48 +0000 (0:00:01.669) 0:00:04.014 ********* 2025-07-12 20:06:44.192267 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:06:44.192285 | orchestrator | 2025-07-12 20:06:44.192296 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-07-12 20:06:44.192307 | orchestrator | Saturday 12 July 2025 20:03:48 +0000 (0:00:00.493) 0:00:04.507 ********* 2025-07-12 20:06:44.192329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.192348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.192361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.192373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.192392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.192417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.192429 | orchestrator | 2025-07-12 20:06:44.192440 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-07-12 20:06:44.192452 | orchestrator | Saturday 12 July 2025 20:03:51 +0000 (0:00:02.821) 0:00:07.329 ********* 2025-07-12 20:06:44.192465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:06:44.192477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:06:44.192495 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:44.192507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:06:44.192532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:06:44.192545 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:44.192557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:06:44.192569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:06:44.193308 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:44.193346 | orchestrator | 2025-07-12 20:06:44.193358 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-07-12 20:06:44.193369 | orchestrator | Saturday 12 July 2025 20:03:52 +0000 (0:00:01.275) 0:00:08.605 ********* 2025-07-12 20:06:44.193381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:06:44.193411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:06:44.193425 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:44.193436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': 
'30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:06:44.193449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:06:44.193472 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:44.193493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:06:44.193524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:06:44.193544 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:44.193562 | orchestrator | 2025-07-12 20:06:44.193581 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-07-12 20:06:44.193600 | orchestrator | Saturday 12 July 2025 20:03:53 +0000 (0:00:00.628) 0:00:09.233 ********* 2025-07-12 20:06:44.193638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.193653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.193673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.193694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.193713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.193736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.193765 | orchestrator | 2025-07-12 20:06:44.193783 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-07-12 20:06:44.194195 | orchestrator | Saturday 12 July 2025 20:03:55 +0000 (0:00:02.268) 0:00:11.502 ********* 2025-07-12 20:06:44.194221 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:44.194235 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:06:44.194246 | 
orchestrator | changed: [testbed-node-2] 2025-07-12 20:06:44.194257 | orchestrator | 2025-07-12 20:06:44.194268 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-07-12 20:06:44.194279 | orchestrator | Saturday 12 July 2025 20:03:58 +0000 (0:00:02.789) 0:00:14.292 ********* 2025-07-12 20:06:44.194290 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:44.194301 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:06:44.194312 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:06:44.194323 | orchestrator | 2025-07-12 20:06:44.194334 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-07-12 20:06:44.194345 | orchestrator | Saturday 12 July 2025 20:03:59 +0000 (0:00:01.414) 0:00:15.707 ********* 2025-07-12 20:06:44.194356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.194382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.194403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:06:44.194416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.194439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:06:44.194460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-12 20:06:44.194472 | orchestrator |
2025-07-12 20:06:44.194484 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-12 20:06:44.194495 | orchestrator | Saturday 12 July 2025 20:04:01 +0000 (0:00:01.995) 0:00:17.702 *********
2025-07-12 20:06:44.194506 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:06:44.194517 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:06:44.194528 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:06:44.194538 | orchestrator |
2025-07-12 20:06:44.194554 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-12 20:06:44.194565 | orchestrator | Saturday 12 July 2025 20:04:02 +0000 (0:00:00.295) 0:00:17.998 *********
2025-07-12 20:06:44.194576 | orchestrator |
2025-07-12 20:06:44.194587 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-12 20:06:44.194599 | orchestrator | Saturday 12 July 2025 20:04:02 +0000 (0:00:00.064) 0:00:18.062 *********
2025-07-12 20:06:44.194611 | orchestrator |
2025-07-12 20:06:44.194622 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-12 20:06:44.194647 | orchestrator | Saturday 12 July 2025 20:04:02 +0000 (0:00:00.096) 0:00:18.159 *********
2025-07-12 20:06:44.194666 | orchestrator |
2025-07-12 20:06:44.194736 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-07-12 20:06:44.194756 | orchestrator | Saturday 12 July 2025 20:04:02 +0000 (0:00:00.247) 0:00:18.406 *********
2025-07-12 20:06:44.194767 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:06:44.194778 | orchestrator |
2025-07-12 20:06:44.194788 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-07-12 20:06:44.194799 | orchestrator | Saturday 12 July 2025 20:04:02 +0000 (0:00:00.224) 0:00:18.630 *********
2025-07-12 20:06:44.194810 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:06:44.194821 | orchestrator |
2025-07-12 20:06:44.194832 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-07-12 20:06:44.194842 | orchestrator | Saturday 12 July 2025 20:04:03 +0000 (0:00:00.206) 0:00:18.836 *********
2025-07-12 20:06:44.194853 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:06:44.194864 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:06:44.194893 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:06:44.194906 | orchestrator |
2025-07-12 20:06:44.194917 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-07-12 20:06:44.194928 | orchestrator | Saturday 12 July 2025 20:05:09 +0000 (0:01:06.500) 0:01:25.337 *********
2025-07-12 20:06:44.194939 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:06:44.194950 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:06:44.194961 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:06:44.194972 | orchestrator |
2025-07-12 20:06:44.194983 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-12 20:06:44.194994 | orchestrator | Saturday 12 July 2025 20:06:32 +0000 (0:01:22.607) 0:02:47.945 *********
2025-07-12 20:06:44.195039 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:06:44.195060 | orchestrator |
2025-07-12 20:06:44.195079 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-07-12 20:06:44.195097 | orchestrator | Saturday 12 July 2025 20:06:32 +0000 (0:00:00.566) 0:02:48.511 *********
2025-07-12 20:06:44.195108 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:06:44.195119 | orchestrator |
2025-07-12 20:06:44.195130 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-07-12 20:06:44.195141 | orchestrator | Saturday 12 July 2025 20:06:35 +0000 (0:00:02.330) 0:02:50.841 *********
2025-07-12 20:06:44.195152 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:06:44.195163 | orchestrator |
2025-07-12 20:06:44.195174 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-07-12 20:06:44.195203 | orchestrator | Saturday 12 July 2025 20:06:37 +0000 (0:00:02.179) 0:02:53.021 *********
2025-07-12 20:06:44.195215 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:06:44.195226 | orchestrator |
2025-07-12 20:06:44.195237 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-07-12 20:06:44.195248 | orchestrator | Saturday 12 July 2025 20:06:40 +0000 (0:00:02.887) 0:02:55.908 *********
2025-07-12 20:06:44.195259 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:06:44.195269 | orchestrator |
2025-07-12 20:06:44.195280 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:06:44.195293 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 20:06:44.195305 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 20:06:44.195316 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 20:06:44.195327 | orchestrator |
2025-07-12 20:06:44.195351 | orchestrator |
2025-07-12 20:06:44.195362 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:06:44.195382 | orchestrator | Saturday 12 July 2025 20:06:42 +0000 (0:00:02.666) 0:02:58.574 *********
2025-07-12 20:06:44.195393 | orchestrator | ===============================================================================
2025-07-12 20:06:44.195405 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.61s
2025-07-12 20:06:44.195415 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.50s
2025-07-12 20:06:44.195426 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.89s
2025-07-12 20:06:44.195437 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.82s
2025-07-12 20:06:44.195457 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.79s
2025-07-12 20:06:44.195474 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.67s
2025-07-12 20:06:44.195493 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.33s
2025-07-12 20:06:44.195511 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.27s
2025-07-12 20:06:44.195528 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.18s
2025-07-12 20:06:44.195554 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.00s
2025-07-12 20:06:44.195573 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.67s
2025-07-12 20:06:44.195591 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.41s
2025-07-12 20:06:44.195609 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.28s
2025-07-12 20:06:44.195630 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.79s
2025-07-12 20:06:44.195647 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.63s
2025-07-12 20:06:44.195667 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s
2025-07-12 20:06:44.195685 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s
2025-07-12 20:06:44.195704 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s
2025-07-12 20:06:44.195723 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s
2025-07-12 20:06:44.195741 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.41s
2025-07-12 20:06:44.195755 | orchestrator | 2025-07-12 20:06:44 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED
2025-07-12 20:06:44.195766 | orchestrator | 2025-07-12 20:06:44 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:06:47.240286 | orchestrator | 2025-07-12 20:06:47 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED
2025-07-12 20:06:47.241306 | orchestrator | 2025-07-12 20:06:47 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED
2025-07-12 20:06:47.241332 | orchestrator | 2025-07-12 20:06:47 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:06:50.280586 | orchestrator | 2025-07-12 20:06:50 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state STARTED
2025-07-12 20:06:50.284323 | orchestrator | 2025-07-12
20:06:50 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:50.284369 | orchestrator | 2025-07-12 20:06:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:06:53.332464 | orchestrator | 2025-07-12 20:06:53 | INFO  | Task de68eb3e-c509-421f-8c41-08ce486ace86 is in state SUCCESS 2025-07-12 20:06:53.335069 | orchestrator | 2025-07-12 20:06:53.335119 | orchestrator | 2025-07-12 20:06:53.335133 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-07-12 20:06:53.335146 | orchestrator | 2025-07-12 20:06:53.335158 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-12 20:06:53.335197 | orchestrator | Saturday 12 July 2025 20:03:44 +0000 (0:00:00.102) 0:00:00.102 ********* 2025-07-12 20:06:53.335209 | orchestrator | ok: [localhost] => { 2025-07-12 20:06:53.335223 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-07-12 20:06:53.335234 | orchestrator | } 2025-07-12 20:06:53.335246 | orchestrator | 2025-07-12 20:06:53.335258 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-07-12 20:06:53.335269 | orchestrator | Saturday 12 July 2025 20:03:44 +0000 (0:00:00.046) 0:00:00.149 ********* 2025-07-12 20:06:53.335281 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-07-12 20:06:53.335294 | orchestrator | ...ignoring 2025-07-12 20:06:53.335306 | orchestrator | 2025-07-12 20:06:53.335317 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-07-12 20:06:53.335328 | orchestrator | Saturday 12 July 2025 20:03:47 +0000 (0:00:02.803) 0:00:02.953 ********* 2025-07-12 20:06:53.335340 | orchestrator | skipping: [localhost] 2025-07-12 20:06:53.335351 | orchestrator | 2025-07-12 20:06:53.335363 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-07-12 20:06:53.335374 | orchestrator | Saturday 12 July 2025 20:03:47 +0000 (0:00:00.057) 0:00:03.011 ********* 2025-07-12 20:06:53.335385 | orchestrator | ok: [localhost] 2025-07-12 20:06:53.335396 | orchestrator | 2025-07-12 20:06:53.335409 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:06:53.335420 | orchestrator | 2025-07-12 20:06:53.335431 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:06:53.335442 | orchestrator | Saturday 12 July 2025 20:03:47 +0000 (0:00:00.150) 0:00:03.162 ********* 2025-07-12 20:06:53.335453 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.335465 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:53.335476 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:53.335487 | orchestrator | 2025-07-12 20:06:53.335498 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:06:53.335509 | orchestrator | Saturday 12 July 2025 20:03:47 +0000 (0:00:00.302) 0:00:03.464 ********* 2025-07-12 20:06:53.335520 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-12 20:06:53.335568 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-07-12 20:06:53.335579 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-07-12 20:06:53.335590 | orchestrator |
2025-07-12 20:06:53.335601 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-07-12 20:06:53.335612 | orchestrator |
2025-07-12 20:06:53.335623 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-07-12 20:06:53.335635 | orchestrator | Saturday 12 July 2025 20:03:48 +0000 (0:00:00.686) 0:00:04.151 *********
2025-07-12 20:06:53.335647 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:06:53.335659 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:06:53.335672 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:06:53.335684 | orchestrator |
2025-07-12 20:06:53.335711 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-12 20:06:53.335724 | orchestrator | Saturday 12 July 2025 20:03:48 +0000 (0:00:00.348) 0:00:04.499 *********
2025-07-12 20:06:53.335736 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:06:53.335750 | orchestrator |
2025-07-12 20:06:53.335762 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-07-12 20:06:53.335774 | orchestrator | Saturday 12 July 2025 20:03:49 +0000 (0:00:00.687) 0:00:05.187 *********
2025-07-12 20:06:53.335812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:06:53.335841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:06:53.335862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:06:53.335882 | orchestrator | 2025-07-12 20:06:53.335902 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-07-12 20:06:53.335916 | orchestrator | Saturday 12 July 2025 20:03:52 +0000 (0:00:03.504) 0:00:08.691 ********* 2025-07-12 20:06:53.335928 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.335941 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.335954 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.335966 | orchestrator | 2025-07-12 20:06:53.335979 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-07-12 20:06:53.335991 | orchestrator | Saturday 12 July 2025 20:03:53 +0000 (0:00:00.640) 0:00:09.332 ********* 2025-07-12 20:06:53.336033 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
20:06:53.336044 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.336055 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.336066 | orchestrator | 2025-07-12 20:06:53.336077 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-07-12 20:06:53.336088 | orchestrator | Saturday 12 July 2025 20:03:55 +0000 (0:00:01.400) 0:00:10.732 ********* 2025-07-12 20:06:53.336105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:06:53.336132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:06:53.336146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 
20:06:53.336158 | orchestrator |
2025-07-12 20:06:53.336169 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-07-12 20:06:53.336180 | orchestrator | Saturday 12 July 2025 20:03:58 +0000 (0:00:03.284) 0:00:14.017 *********
2025-07-12 20:06:53.336191 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:06:53.336209 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:06:53.336224 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:06:53.336236 | orchestrator |
2025-07-12 20:06:53.336246 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-07-12 20:06:53.336257 | orchestrator | Saturday 12 July 2025 20:03:59 +0000 (0:00:00.937) 0:00:14.954 *********
2025-07-12 20:06:53.336268 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:06:53.336279 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:06:53.336290 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:06:53.336301 | orchestrator |
2025-07-12 20:06:53.336312 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-12 20:06:53.336322 | orchestrator | Saturday 12 July 2025 20:04:03 +0000 (0:00:04.034) 0:00:18.989 *********
2025-07-12 20:06:53.336333 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:06:53.336344 | orchestrator |
2025-07-12 20:06:53.336355 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-07-12 20:06:53.336366 | orchestrator | Saturday 12 July 2025 20:04:03 +0000 (0:00:00.569) 0:00:19.558 *********
2025-07-12 20:06:53.336417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes':
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:06:53.336433 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.336451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:06:53.336470 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.336490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:06:53.336502 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.336513 | orchestrator | 2025-07-12 20:06:53.336524 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-12 20:06:53.336536 | orchestrator | Saturday 12 July 2025 20:04:06 +0000 (0:00:02.966) 0:00:22.525 ********* 2025-07-12 20:06:53.336552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:06:53.336570 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.336588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:06:53.336600 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.336612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:06:53.336629 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.336641 | orchestrator | 2025-07-12 20:06:53.336652 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-12 20:06:53.336663 | orchestrator | Saturday 12 July 2025 20:04:09 +0000 (0:00:02.591) 0:00:25.117 ********* 2025-07-12 20:06:53.336685 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:06:53.336698 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.336719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:06:53.336738 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.336756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:06:53.336768 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.337217 | orchestrator | 2025-07-12 20:06:53.337236 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-07-12 20:06:53.337248 | orchestrator | Saturday 12 July 2025 20:04:11 +0000 
(0:00:02.521) 0:00:27.638 ********* 2025-07-12 20:06:53.337272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:06:53.337350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:06:53.337390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:06:53.337411 | orchestrator | 2025-07-12 20:06:53.337422 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-07-12 20:06:53.337433 | orchestrator | Saturday 12 July 2025 20:04:15 +0000 (0:00:03.357) 0:00:30.996 ********* 2025-07-12 20:06:53.337444 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.337455 | orchestrator | 
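The `haproxy.mariadb.custom_member_list` entries in the service definitions above follow a fixed pattern: every shard member gets the same `check port 3306 inter 2000 rise 2 fall 5` options, and every node except the primary is marked `backup` so HAProxy only fails over when the first member is down. A minimal sketch of how such a list could be rendered from a node/IP table; `render_member_list` is a hypothetical helper for illustration, not part of kolla-ansible:

```python
# Sketch: rebuild the HAProxy custom_member_list entries seen in the
# mariadb service definition above. render_member_list() is a
# hypothetical helper name, not a kolla-ansible function.
def render_member_list(nodes, primary, port=3306):
    lines = []
    for name, addr in nodes:
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if name != primary:
            line += " backup"   # non-primary Galera members are backups
        lines.append(line)
    lines.append("")            # trailing empty entry, as in the log output
    return lines

nodes = [("testbed-node-0", "192.168.16.10"),
         ("testbed-node-1", "192.168.16.11"),
         ("testbed-node-2", "192.168.16.12")]
members = render_member_list(nodes, primary="testbed-node-0")
```

Routing all traffic to one writer at a time is the usual way to avoid multi-master write conflicts in a Galera cluster fronted by HAProxy.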
changed: [testbed-node-1] 2025-07-12 20:06:53.337466 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:06:53.337476 | orchestrator | 2025-07-12 20:06:53.337487 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-07-12 20:06:53.337498 | orchestrator | Saturday 12 July 2025 20:04:16 +0000 (0:00:01.152) 0:00:32.148 ********* 2025-07-12 20:06:53.337509 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.337520 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:53.337531 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:53.337542 | orchestrator | 2025-07-12 20:06:53.337553 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-07-12 20:06:53.337564 | orchestrator | Saturday 12 July 2025 20:04:16 +0000 (0:00:00.344) 0:00:32.493 ********* 2025-07-12 20:06:53.337574 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.337585 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:53.337596 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:53.337607 | orchestrator | 2025-07-12 20:06:53.337617 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-07-12 20:06:53.337628 | orchestrator | Saturday 12 July 2025 20:04:17 +0000 (0:00:00.348) 0:00:32.841 ********* 2025-07-12 20:06:53.337640 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-07-12 20:06:53.337652 | orchestrator | ...ignoring 2025-07-12 20:06:53.337669 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-07-12 20:06:53.337680 | orchestrator | ...ignoring 2025-07-12 20:06:53.337691 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-07-12 20:06:53.337701 | orchestrator | ...ignoring 2025-07-12 20:06:53.337712 | orchestrator | 2025-07-12 20:06:53.337723 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-07-12 20:06:53.337734 | orchestrator | Saturday 12 July 2025 20:04:27 +0000 (0:00:10.830) 0:00:43.672 ********* 2025-07-12 20:06:53.337745 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.337755 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:53.337766 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:53.337777 | orchestrator | 2025-07-12 20:06:53.337788 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-07-12 20:06:53.337799 | orchestrator | Saturday 12 July 2025 20:04:28 +0000 (0:00:00.634) 0:00:44.306 ********* 2025-07-12 20:06:53.337812 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.337825 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.337838 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.337850 | orchestrator | 2025-07-12 20:06:53.337863 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-07-12 20:06:53.337875 | orchestrator | Saturday 12 July 2025 20:04:29 +0000 (0:00:00.441) 0:00:44.748 ********* 2025-07-12 20:06:53.337887 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.337900 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.337913 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.337925 | orchestrator | 2025-07-12 20:06:53.337938 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-07-12 20:06:53.337951 | orchestrator | Saturday 12 July 2025 20:04:29 +0000 (0:00:00.434) 0:00:45.182 ********* 2025-07-12 20:06:53.337963 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 20:06:53.337983 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.338186 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.338209 | orchestrator | 2025-07-12 20:06:53.338220 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-07-12 20:06:53.338231 | orchestrator | Saturday 12 July 2025 20:04:29 +0000 (0:00:00.408) 0:00:45.591 ********* 2025-07-12 20:06:53.338241 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.338250 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:53.338260 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:53.338269 | orchestrator | 2025-07-12 20:06:53.338279 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-07-12 20:06:53.338289 | orchestrator | Saturday 12 July 2025 20:04:30 +0000 (0:00:00.665) 0:00:46.257 ********* 2025-07-12 20:06:53.338306 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.338317 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.338326 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.338336 | orchestrator | 2025-07-12 20:06:53.338345 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 20:06:53.338355 | orchestrator | Saturday 12 July 2025 20:04:30 +0000 (0:00:00.435) 0:00:46.692 ********* 2025-07-12 20:06:53.338364 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.338374 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.338383 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-07-12 20:06:53.338393 | orchestrator | 2025-07-12 20:06:53.338403 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-07-12 20:06:53.338412 | orchestrator | Saturday 12 July 2025 20:04:31 +0000 (0:00:00.371) 0:00:47.064 ********* 2025-07-12 
20:06:53.338422 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.338431 | orchestrator | 2025-07-12 20:06:53.338441 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-07-12 20:06:53.338450 | orchestrator | Saturday 12 July 2025 20:04:41 +0000 (0:00:10.426) 0:00:57.490 ********* 2025-07-12 20:06:53.338531 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.338541 | orchestrator | 2025-07-12 20:06:53.338551 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 20:06:53.338561 | orchestrator | Saturday 12 July 2025 20:04:41 +0000 (0:00:00.125) 0:00:57.616 ********* 2025-07-12 20:06:53.338609 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.338621 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.338631 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.338640 | orchestrator | 2025-07-12 20:06:53.338650 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-07-12 20:06:53.338659 | orchestrator | Saturday 12 July 2025 20:04:42 +0000 (0:00:01.003) 0:00:58.619 ********* 2025-07-12 20:06:53.338669 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.338679 | orchestrator | 2025-07-12 20:06:53.338689 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-07-12 20:06:53.338721 | orchestrator | Saturday 12 July 2025 20:04:50 +0000 (0:00:07.781) 0:01:06.400 ********* 2025-07-12 20:06:53.338732 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.338742 | orchestrator | 2025-07-12 20:06:53.338751 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-07-12 20:06:53.338761 | orchestrator | Saturday 12 July 2025 20:04:52 +0000 (0:00:01.567) 0:01:07.967 ********* 2025-07-12 20:06:53.338771 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.338780 | 
orchestrator | 2025-07-12 20:06:53.338790 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-07-12 20:06:53.338799 | orchestrator | Saturday 12 July 2025 20:04:54 +0000 (0:00:02.499) 0:01:10.467 ********* 2025-07-12 20:06:53.338832 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.338843 | orchestrator | 2025-07-12 20:06:53.338852 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-07-12 20:06:53.338862 | orchestrator | Saturday 12 July 2025 20:04:54 +0000 (0:00:00.121) 0:01:10.588 ********* 2025-07-12 20:06:53.338883 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.338893 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.338902 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.338912 | orchestrator | 2025-07-12 20:06:53.338921 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-07-12 20:06:53.338937 | orchestrator | Saturday 12 July 2025 20:04:55 +0000 (0:00:00.547) 0:01:11.136 ********* 2025-07-12 20:06:53.338947 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.338956 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-12 20:06:53.338966 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:06:53.339023 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:06:53.339033 | orchestrator | 2025-07-12 20:06:53.339043 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-12 20:06:53.339053 | orchestrator | skipping: no hosts matched 2025-07-12 20:06:53.339062 | orchestrator | 2025-07-12 20:06:53.339072 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 20:06:53.339081 | orchestrator | 2025-07-12 20:06:53.339091 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2025-07-12 20:06:53.339100 | orchestrator | Saturday 12 July 2025 20:04:55 +0000 (0:00:00.358) 0:01:11.494 ********* 2025-07-12 20:06:53.339110 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:06:53.339120 | orchestrator | 2025-07-12 20:06:53.339129 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 20:06:53.339139 | orchestrator | Saturday 12 July 2025 20:05:14 +0000 (0:00:18.719) 0:01:30.214 ********* 2025-07-12 20:06:53.339148 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:53.339158 | orchestrator | 2025-07-12 20:06:53.339170 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 20:06:53.339180 | orchestrator | Saturday 12 July 2025 20:05:35 +0000 (0:00:20.654) 0:01:50.868 ********* 2025-07-12 20:06:53.339191 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:53.339202 | orchestrator | 2025-07-12 20:06:53.339213 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 20:06:53.339224 | orchestrator | 2025-07-12 20:06:53.339235 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 20:06:53.339246 | orchestrator | Saturday 12 July 2025 20:05:37 +0000 (0:00:02.562) 0:01:53.430 ********* 2025-07-12 20:06:53.339257 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:06:53.339268 | orchestrator | 2025-07-12 20:06:53.339278 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 20:06:53.339289 | orchestrator | Saturday 12 July 2025 20:05:56 +0000 (0:00:19.058) 0:02:12.488 ********* 2025-07-12 20:06:53.339301 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:53.339311 | orchestrator | 2025-07-12 20:06:53.339322 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 20:06:53.339333 
| orchestrator | Saturday 12 July 2025 20:06:17 +0000 (0:00:20.564) 0:02:33.053 ********* 2025-07-12 20:06:53.339343 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:53.339355 | orchestrator | 2025-07-12 20:06:53.339366 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-12 20:06:53.339377 | orchestrator | 2025-07-12 20:06:53.339395 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 20:06:53.339408 | orchestrator | Saturday 12 July 2025 20:06:20 +0000 (0:00:02.750) 0:02:35.803 ********* 2025-07-12 20:06:53.339418 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.339429 | orchestrator | 2025-07-12 20:06:53.339440 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 20:06:53.339451 | orchestrator | Saturday 12 July 2025 20:06:30 +0000 (0:00:10.670) 0:02:46.473 ********* 2025-07-12 20:06:53.339462 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.339473 | orchestrator | 2025-07-12 20:06:53.339484 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 20:06:53.339503 | orchestrator | Saturday 12 July 2025 20:06:36 +0000 (0:00:05.523) 0:02:51.996 ********* 2025-07-12 20:06:53.339514 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.339525 | orchestrator | 2025-07-12 20:06:53.339535 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-12 20:06:53.339547 | orchestrator | 2025-07-12 20:06:53.339556 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-12 20:06:53.339566 | orchestrator | Saturday 12 July 2025 20:06:38 +0000 (0:00:02.264) 0:02:54.261 ********* 2025-07-12 20:06:53.339575 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:06:53.339585 | orchestrator | 
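The "Check MariaDB service port liveness" failures earlier ("Timeout when waiting for search string MariaDB in 192.168.16.10:3306") come from Ansible's `wait_for` module with a `search_regex` of `MariaDB`: before the cluster is bootstrapped nothing answers on 3306, so the probe times out and the failure is ignored. A rough sketch of just the matching step, applied to an already-captured greeting buffer (the greeting bytes below are illustrative, not a real packet capture — a real MySQL/MariaDB handshake packet embeds the server version string):

```python
import re

def banner_matches(banner: bytes, search_regex: str = "MariaDB") -> bool:
    """Mimic wait_for's search_regex test on a captured TCP banner."""
    return re.search(search_regex.encode(), banner) is not None

# Illustrative greeting: MariaDB embeds its version string in the handshake.
greeting = b"\x0a11.4.5-MariaDB-log\x00"
```

`banner_matches(greeting)` is true once a MariaDB server is answering, while an empty or foreign banner fails the check, which is exactly why the task times out on nodes that have not started the service yet.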
2025-07-12 20:06:53.339594 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-07-12 20:06:53.339604 | orchestrator | Saturday 12 July 2025 20:06:39 +0000 (0:00:00.509) 0:02:54.770 ********* 2025-07-12 20:06:53.339613 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.339623 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.339633 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.339642 | orchestrator | 2025-07-12 20:06:53.339652 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-07-12 20:06:53.339661 | orchestrator | Saturday 12 July 2025 20:06:41 +0000 (0:00:02.556) 0:02:57.326 ********* 2025-07-12 20:06:53.339671 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.339681 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.339690 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.339699 | orchestrator | 2025-07-12 20:06:53.339709 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-07-12 20:06:53.339719 | orchestrator | Saturday 12 July 2025 20:06:43 +0000 (0:00:02.168) 0:02:59.495 ********* 2025-07-12 20:06:53.339728 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.339738 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.339747 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.339757 | orchestrator | 2025-07-12 20:06:53.339767 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-07-12 20:06:53.339776 | orchestrator | Saturday 12 July 2025 20:06:46 +0000 (0:00:02.332) 0:03:01.828 ********* 2025-07-12 20:06:53.339786 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.339795 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.339805 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:53.339814 | orchestrator | 
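The repeated "Wait for MariaDB service to sync WSREP" tasks above poll the node until Galera reports it fully synced; the conventional signal is the `wsrep_local_state_comment` status variable reaching `Synced`. A sketch of parsing that status as the `mysql` CLI prints it (tab-separated rows); the exact query and output handling kolla-ansible uses may differ:

```python
def wsrep_synced(show_status_output: str) -> bool:
    # Parse "Variable_name\tValue" rows (mysql CLI batch output) and
    # check that wsrep_local_state_comment reports Synced.
    for row in show_status_output.splitlines():
        parts = row.split("\t")
        if len(parts) == 2 and parts[0] == "wsrep_local_state_comment":
            return parts[1] == "Synced"
    return False
```

States like `Donor/Desynced` or `Joined` mean the node is still transferring or catching up, so a deployment that proceeded before `Synced` could restart a node that the cluster still depends on.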
2025-07-12 20:06:53.339824 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-07-12 20:06:53.339834 | orchestrator | Saturday 12 July 2025 20:06:48 +0000 (0:00:02.197) 0:03:04.026 ********* 2025-07-12 20:06:53.339843 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:53.339853 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:53.339862 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:53.339872 | orchestrator | 2025-07-12 20:06:53.339886 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-12 20:06:53.339896 | orchestrator | Saturday 12 July 2025 20:06:51 +0000 (0:00:02.949) 0:03:06.975 ********* 2025-07-12 20:06:53.339906 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:06:53.339915 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:06:53.339925 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:06:53.339934 | orchestrator | 2025-07-12 20:06:53.339944 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:06:53.339954 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 20:06:53.339964 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-07-12 20:06:53.339975 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 20:06:53.339984 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 20:06:53.340052 | orchestrator | 2025-07-12 20:06:53.340064 | orchestrator | 2025-07-12 20:06:53.340073 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:06:53.340083 | orchestrator | Saturday 12 July 2025 20:06:51 +0000 (0:00:00.224) 0:03:07.199 ********* 2025-07-12 20:06:53.340093 | 
orchestrator | =============================================================================== 2025-07-12 20:06:53.340103 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.22s 2025-07-12 20:06:53.340112 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.78s 2025-07-12 20:06:53.340122 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.83s 2025-07-12 20:06:53.340132 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.67s 2025-07-12 20:06:53.340141 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.43s 2025-07-12 20:06:53.340151 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.78s 2025-07-12 20:06:53.340167 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.52s 2025-07-12 20:06:53.340177 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.31s 2025-07-12 20:06:53.340187 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.03s 2025-07-12 20:06:53.340197 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.50s 2025-07-12 20:06:53.340206 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.36s 2025-07-12 20:06:53.340216 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.28s 2025-07-12 20:06:53.340226 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.97s 2025-07-12 20:06:53.340235 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.95s 2025-07-12 20:06:53.340245 | orchestrator | Check MariaDB service --------------------------------------------------- 2.80s 2025-07-12 20:06:53.340252 | orchestrator | 
service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.59s 2025-07-12 20:06:53.340260 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.56s 2025-07-12 20:06:53.340268 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.52s 2025-07-12 20:06:53.340276 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.50s 2025-07-12 20:06:53.340284 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.33s 2025-07-12 20:06:53.340292 | orchestrator | 2025-07-12 20:06:53 | INFO  | Task 5aec4dbf-0648-4a02-8bab-8f13c1300fe0 is in state STARTED 2025-07-12 20:06:53.340300 | orchestrator | 2025-07-12 20:06:53 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state STARTED 2025-07-12 20:06:53.340308 | orchestrator | 2025-07-12 20:06:53 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:06:53.340316 | orchestrator | 2025-07-12 20:06:53 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:07:54.211880 | orchestrator | 2025-07-12 20:07:54 | INFO  | Task dce19933-dc5f-4392-a74e-ba72d790a3e1 is in state STARTED 2025-07-12 20:07:54.213949 | orchestrator | 2025-07-12 20:07:54 | INFO  | Task 5aec4dbf-0648-4a02-8bab-8f13c1300fe0 is in state STARTED 2025-07-12 20:07:54.217742 | orchestrator | 2025-07-12 20:07:54 | INFO  | Task 4fbd0bb3-c44d-40f7-8492-854c05bb4447 is in state SUCCESS 2025-07-12 20:07:54.220087 | orchestrator | 2025-07-12 20:07:54.220148 | orchestrator | 2025-07-12 20:07:54.220162 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-07-12 20:07:54.220175 | orchestrator | 2025-07-12 20:07:54.220186 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-07-12 20:07:54.220198 | orchestrator | Saturday 12 July 2025 20:05:45 +0000 (0:00:00.584) 0:00:00.584 ********* 2025-07-12 20:07:54.220210 | orchestrator |
included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:07:54.220222 | orchestrator | 2025-07-12 20:07:54.220233 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-12 20:07:54.220246 | orchestrator | Saturday 12 July 2025 20:05:45 +0000 (0:00:00.614) 0:00:01.199 ********* 2025-07-12 20:07:54.220333 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:07:54.220353 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.220464 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:07:54.220488 | orchestrator | 2025-07-12 20:07:54.220507 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-07-12 20:07:54.220526 | orchestrator | Saturday 12 July 2025 20:05:46 +0000 (0:00:00.643) 0:00:01.842 ********* 2025-07-12 20:07:54.220606 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.220621 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:07:54.220633 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:07:54.220653 | orchestrator | 2025-07-12 20:07:54.220672 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-07-12 20:07:54.220700 | orchestrator | Saturday 12 July 2025 20:05:46 +0000 (0:00:00.292) 0:00:02.135 ********* 2025-07-12 20:07:54.220722 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.220863 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:07:54.220995 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:07:54.221478 | orchestrator | 2025-07-12 20:07:54.221502 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-07-12 20:07:54.221520 | orchestrator | Saturday 12 July 2025 20:05:47 +0000 (0:00:00.769) 0:00:02.904 ********* 2025-07-12 20:07:54.221537 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.221554 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:07:54.221572 | 
orchestrator | ok: [testbed-node-5] 2025-07-12 20:07:54.221597 | orchestrator | 2025-07-12 20:07:54.221619 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-07-12 20:07:54.221637 | orchestrator | Saturday 12 July 2025 20:05:47 +0000 (0:00:00.296) 0:00:03.201 ********* 2025-07-12 20:07:54.221653 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.221668 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:07:54.221685 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:07:54.221702 | orchestrator | 2025-07-12 20:07:54.221720 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-07-12 20:07:54.221736 | orchestrator | Saturday 12 July 2025 20:05:48 +0000 (0:00:00.295) 0:00:03.496 ********* 2025-07-12 20:07:54.221753 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.221769 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:07:54.221785 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:07:54.221802 | orchestrator | 2025-07-12 20:07:54.221819 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-07-12 20:07:54.221837 | orchestrator | Saturday 12 July 2025 20:05:48 +0000 (0:00:00.316) 0:00:03.812 ********* 2025-07-12 20:07:54.221856 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.221878 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.221897 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.221921 | orchestrator | 2025-07-12 20:07:54.221952 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-07-12 20:07:54.221975 | orchestrator | Saturday 12 July 2025 20:05:49 +0000 (0:00:00.473) 0:00:04.286 ********* 2025-07-12 20:07:54.222105 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.222135 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:07:54.222156 | orchestrator | ok: [testbed-node-5] 2025-07-12 
20:07:54.222176 | orchestrator | 2025-07-12 20:07:54.222193 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-07-12 20:07:54.222209 | orchestrator | Saturday 12 July 2025 20:05:49 +0000 (0:00:00.288) 0:00:04.575 ********* 2025-07-12 20:07:54.222229 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 20:07:54.222243 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 20:07:54.222256 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:07:54.222269 | orchestrator | 2025-07-12 20:07:54.222282 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-07-12 20:07:54.222294 | orchestrator | Saturday 12 July 2025 20:05:49 +0000 (0:00:00.600) 0:00:05.175 ********* 2025-07-12 20:07:54.222307 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.222319 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:07:54.222332 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:07:54.222344 | orchestrator | 2025-07-12 20:07:54.222357 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-07-12 20:07:54.222370 | orchestrator | Saturday 12 July 2025 20:05:50 +0000 (0:00:00.405) 0:00:05.580 ********* 2025-07-12 20:07:54.222382 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 20:07:54.222394 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 20:07:54.222405 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:07:54.222416 | orchestrator | 2025-07-12 20:07:54.222427 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-07-12 20:07:54.222480 | orchestrator | 
Saturday 12 July 2025 20:05:52 +0000 (0:00:02.031) 0:00:07.612 ********* 2025-07-12 20:07:54.222492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-12 20:07:54.222506 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-12 20:07:54.222524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-12 20:07:54.222540 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.222551 | orchestrator | 2025-07-12 20:07:54.222575 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-07-12 20:07:54.222607 | orchestrator | Saturday 12 July 2025 20:05:52 +0000 (0:00:00.390) 0:00:08.002 ********* 2025-07-12 20:07:54.222623 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.222645 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.222656 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.222667 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.222678 | orchestrator | 2025-07-12 20:07:54.222689 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-07-12 20:07:54.222700 | orchestrator | Saturday 12 July 2025 20:05:53 +0000 (0:00:00.739) 0:00:08.741 ********* 2025-07-12 20:07:54.222714 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.222729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.222741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.222753 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.222764 | orchestrator | 2025-07-12 20:07:54.222775 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-07-12 20:07:54.222786 | orchestrator | Saturday 12 July 2025 20:05:53 +0000 (0:00:00.158) 0:00:08.899 ********* 2025-07-12 20:07:54.222800 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '5516968339c6', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 20:05:50.973997', 'end': '2025-07-12 20:05:51.011983', 'delta': '0:00:00.037986', 'msg': '', 'invocation': 
{'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['5516968339c6'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-07-12 20:07:54.222825 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '84bd2201c0a5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 20:05:51.670636', 'end': '2025-07-12 20:05:51.713983', 'delta': '0:00:00.043347', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['84bd2201c0a5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-07-12 20:07:54.222855 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '145ffe361713', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 20:05:52.227465', 'end': '2025-07-12 20:05:52.261954', 'delta': '0:00:00.034489', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['145ffe361713'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 
'ansible_loop_var': 'item'}) 2025-07-12 20:07:54.222867 | orchestrator | 2025-07-12 20:07:54.222878 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-07-12 20:07:54.222893 | orchestrator | Saturday 12 July 2025 20:05:54 +0000 (0:00:00.356) 0:00:09.256 ********* 2025-07-12 20:07:54.222912 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.222924 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:07:54.222941 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:07:54.222958 | orchestrator | 2025-07-12 20:07:54.222969 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-07-12 20:07:54.222980 | orchestrator | Saturday 12 July 2025 20:05:54 +0000 (0:00:00.418) 0:00:09.674 ********* 2025-07-12 20:07:54.222991 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-07-12 20:07:54.223071 | orchestrator | 2025-07-12 20:07:54.223094 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-07-12 20:07:54.223107 | orchestrator | Saturday 12 July 2025 20:05:56 +0000 (0:00:02.310) 0:00:11.985 ********* 2025-07-12 20:07:54.223118 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223129 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.223140 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223151 | orchestrator | 2025-07-12 20:07:54.223162 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-07-12 20:07:54.223173 | orchestrator | Saturday 12 July 2025 20:05:57 +0000 (0:00:00.277) 0:00:12.263 ********* 2025-07-12 20:07:54.223183 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223194 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.223205 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223216 | orchestrator | 2025-07-12 20:07:54.223227 | orchestrator | TASK [ceph-facts : 
Set_fact fsid] ********************************************** 2025-07-12 20:07:54.223238 | orchestrator | Saturday 12 July 2025 20:05:57 +0000 (0:00:00.387) 0:00:12.651 ********* 2025-07-12 20:07:54.223249 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223259 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.223270 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223281 | orchestrator | 2025-07-12 20:07:54.223292 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-07-12 20:07:54.223303 | orchestrator | Saturday 12 July 2025 20:05:57 +0000 (0:00:00.472) 0:00:13.124 ********* 2025-07-12 20:07:54.223324 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:07:54.223335 | orchestrator | 2025-07-12 20:07:54.223346 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-07-12 20:07:54.223360 | orchestrator | Saturday 12 July 2025 20:05:58 +0000 (0:00:00.135) 0:00:13.259 ********* 2025-07-12 20:07:54.223383 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223410 | orchestrator | 2025-07-12 20:07:54.223429 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-07-12 20:07:54.223446 | orchestrator | Saturday 12 July 2025 20:05:58 +0000 (0:00:00.224) 0:00:13.483 ********* 2025-07-12 20:07:54.223464 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223481 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.223497 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223514 | orchestrator | 2025-07-12 20:07:54.223531 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-07-12 20:07:54.223547 | orchestrator | Saturday 12 July 2025 20:05:58 +0000 (0:00:00.289) 0:00:13.772 ********* 2025-07-12 20:07:54.223560 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223570 | orchestrator | 
skipping: [testbed-node-4] 2025-07-12 20:07:54.223580 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223590 | orchestrator | 2025-07-12 20:07:54.223599 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-07-12 20:07:54.223614 | orchestrator | Saturday 12 July 2025 20:05:58 +0000 (0:00:00.335) 0:00:14.107 ********* 2025-07-12 20:07:54.223630 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223640 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.223650 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223659 | orchestrator | 2025-07-12 20:07:54.223669 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-07-12 20:07:54.223679 | orchestrator | Saturday 12 July 2025 20:05:59 +0000 (0:00:00.504) 0:00:14.612 ********* 2025-07-12 20:07:54.223689 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223700 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.223717 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223729 | orchestrator | 2025-07-12 20:07:54.223739 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-07-12 20:07:54.223749 | orchestrator | Saturday 12 July 2025 20:05:59 +0000 (0:00:00.347) 0:00:14.959 ********* 2025-07-12 20:07:54.223759 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223768 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.223778 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223788 | orchestrator | 2025-07-12 20:07:54.223797 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-07-12 20:07:54.223807 | orchestrator | Saturday 12 July 2025 20:06:00 +0000 (0:00:00.335) 0:00:15.295 ********* 2025-07-12 20:07:54.223817 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223826 | orchestrator | 
skipping: [testbed-node-4] 2025-07-12 20:07:54.223836 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223846 | orchestrator | 2025-07-12 20:07:54.223864 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-07-12 20:07:54.223883 | orchestrator | Saturday 12 July 2025 20:06:00 +0000 (0:00:00.306) 0:00:15.601 ********* 2025-07-12 20:07:54.223894 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.223904 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.223913 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.223923 | orchestrator | 2025-07-12 20:07:54.223933 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-12 20:07:54.223943 | orchestrator | Saturday 12 July 2025 20:06:00 +0000 (0:00:00.490) 0:00:16.091 ********* 2025-07-12 20:07:54.223954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d5945923--5bd4--5f45--a4a9--07ddacb4606e-osd--block--d5945923--5bd4--5f45--a4a9--07ddacb4606e', 'dm-uuid-LVM-E4eL0LCKh1BPKY8m2SRlztTYqYZwNxGHdCPWgbJnJgpsuF01ckXDgnYtveU2JBvH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.223975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--661525d0--45b6--5e60--bde8--1fec1e4af76b-osd--block--661525d0--45b6--5e60--bde8--1fec1e4af76b', 'dm-uuid-LVM-MhXrNIYhW041vv8F14dWtTjGOwNwuQZklnSVX9pu8rZxwvpajueBUzVQ08pTYxHG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.223986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.223997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-07-12 20:07:54.224053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part1', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part14', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part15', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part16', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 
20:07:54.224139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--d5945923--5bd4--5f45--a4a9--07ddacb4606e-osd--block--d5945923--5bd4--5f45--a4a9--07ddacb4606e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aM5aYs-O17P-23z5-vw4u-RED1-bHgy-2Qq0cS', 'scsi-0QEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9', 'scsi-SQEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--661525d0--45b6--5e60--bde8--1fec1e4af76b-osd--block--661525d0--45b6--5e60--bde8--1fec1e4af76b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VAW22O-gcuB-Pls3-j1kL-HI2W-ihHd-pGfR7E', 'scsi-0QEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94', 'scsi-SQEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418', 'scsi-SQEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa90e2bf--e75d--5c47--ae76--8a1384e00d58-osd--block--aa90e2bf--e75d--5c47--ae76--8a1384e00d58', 'dm-uuid-LVM-lyZgmPFNbStq4ZjJ5YzNYvvGdw7sbdHU6rfBnK9q8FkkqwaYN2SALLc0g8VAlILf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224213 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f895b30--8de9--512a--b128--a5c9585d4791-osd--block--2f895b30--8de9--512a--b128--a5c9585d4791', 
'dm-uuid-LVM-MEOVNephN7hzmyal4PNe2WbCkByuS3py5A19FOo2P8GuaxCIo2W6IWNF7okT5PDR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224233 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224300 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.224314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--aa90e2bf--e75d--5c47--ae76--8a1384e00d58-osd--block--aa90e2bf--e75d--5c47--ae76--8a1384e00d58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xbaqud-QFcO-hkZ1-R2n7-smvj-mLc2-DdeLaP', 'scsi-0QEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7', 'scsi-SQEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2f895b30--8de9--512a--b128--a5c9585d4791-osd--block--2f895b30--8de9--512a--b128--a5c9585d4791'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V9AqmV-XoN7-NG0Q-oNME-OAER-Ejob-7U7cUx', 'scsi-0QEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350', 'scsi-SQEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb', 'scsi-SQEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224437 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.224453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a-osd--block--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a', 'dm-uuid-LVM-TdYNvufdYHm7xfhdXH7cFx9dQQGYc1tDnH9PGQBkBNzkl3uLDheiVs9v9EI4xx3K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71032f38--677b--542f--825f--c43a6d71b028-osd--block--71032f38--677b--542f--825f--c43a6d71b028', 'dm-uuid-LVM-O1VDBnk7la3dA9fvBRCK8INxI7gUKwapmWzNjIh5Dt5coqHLZWucSQFbeq1udFyd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224556 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224603 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:07:54.224659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part1', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part14', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part15', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part16', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224677 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a-osd--block--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d2D0Ta-V3e8-9KEz-3AwB-4e3O-dsh5-WwWg4F', 'scsi-0QEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8', 'scsi-SQEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--71032f38--677b--542f--825f--c43a6d71b028-osd--block--71032f38--677b--542f--825f--c43a6d71b028'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-E59y2T-5LhQ-PhO1-6zgU-tgBF-5nbX-Or2zhA', 'scsi-0QEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28', 'scsi-SQEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224708 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914', 'scsi-SQEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:07:54.224761 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:07:54.224776 | orchestrator | 2025-07-12 20:07:54.224790 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-12 20:07:54.224804 | orchestrator | Saturday 12 July 2025 20:06:01 +0000 (0:00:00.599) 0:00:16.691 ********* 2025-07-12 20:07:54.224820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--d5945923--5bd4--5f45--a4a9--07ddacb4606e-osd--block--d5945923--5bd4--5f45--a4a9--07ddacb4606e', 'dm-uuid-LVM-E4eL0LCKh1BPKY8m2SRlztTYqYZwNxGHdCPWgbJnJgpsuF01ckXDgnYtveU2JBvH'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.224836 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--661525d0--45b6--5e60--bde8--1fec1e4af76b-osd--block--661525d0--45b6--5e60--bde8--1fec1e4af76b', 'dm-uuid-LVM-MhXrNIYhW041vv8F14dWtTjGOwNwuQZklnSVX9pu8rZxwvpajueBUzVQ08pTYxHG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.224854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.224870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.224896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.224928 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.224947 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.224965 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.224983 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225045 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--aa90e2bf--e75d--5c47--ae76--8a1384e00d58-osd--block--aa90e2bf--e75d--5c47--ae76--8a1384e00d58', 'dm-uuid-LVM-lyZgmPFNbStq4ZjJ5YzNYvvGdw7sbdHU6rfBnK9q8FkkqwaYN2SALLc0g8VAlILf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225096 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part1', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part14', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part15', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part16', 'scsi-SQEMU_QEMU_HARDDISK_8365c504-c177-40d4-a7fd-588ef9dda518-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 20:07:54.225114 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2f895b30--8de9--512a--b128--a5c9585d4791-osd--block--2f895b30--8de9--512a--b128--a5c9585d4791', 'dm-uuid-LVM-MEOVNephN7hzmyal4PNe2WbCkByuS3py5A19FOo2P8GuaxCIo2W6IWNF7okT5PDR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225132 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--d5945923--5bd4--5f45--a4a9--07ddacb4606e-osd--block--d5945923--5bd4--5f45--a4a9--07ddacb4606e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aM5aYs-O17P-23z5-vw4u-RED1-bHgy-2Qq0cS', 'scsi-0QEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9', 'scsi-SQEMU_QEMU_HARDDISK_a9737b61-a1af-4e5f-b757-491f643427f9'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225165 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--661525d0--45b6--5e60--bde8--1fec1e4af76b-osd--block--661525d0--45b6--5e60--bde8--1fec1e4af76b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VAW22O-gcuB-Pls3-j1kL-HI2W-ihHd-pGfR7E', 'scsi-0QEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94', 'scsi-SQEMU_QEMU_HARDDISK_da649787-4cd2-466e-b254-be39940a6b94'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225211 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225228 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418', 'scsi-SQEMU_QEMU_HARDDISK_5dd700c9-bc5e-4428-837a-aadccc164418'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225272 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-35-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225304 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225321 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225336 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225352 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225368 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225410 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part1', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part14', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part15', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part16', 'scsi-SQEMU_QEMU_HARDDISK_d95ed02d-de93-4ced-b5a0-253568193ec9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 20:07:54.225428 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--aa90e2bf--e75d--5c47--ae76--8a1384e00d58-osd--block--aa90e2bf--e75d--5c47--ae76--8a1384e00d58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Xbaqud-QFcO-hkZ1-R2n7-smvj-mLc2-DdeLaP', 'scsi-0QEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7', 'scsi-SQEMU_QEMU_HARDDISK_0a88cf92-9e41-408b-a9d0-3b2da488fdc7'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225444 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.225462 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2f895b30--8de9--512a--b128--a5c9585d4791-osd--block--2f895b30--8de9--512a--b128--a5c9585d4791'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-V9AqmV-XoN7-NG0Q-oNME-OAER-Ejob-7U7cUx', 'scsi-0QEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350', 'scsi-SQEMU_QEMU_HARDDISK_e599fd12-11ff-4888-9095-9cc0b7d1a350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.225492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb', 'scsi-SQEMU_QEMU_HARDDISK_b969ddd5-1efc-4cf4-bfb5-791b8cfcdfbb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226299 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-33-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226478 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.226501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a-osd--block--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a', 'dm-uuid-LVM-TdYNvufdYHm7xfhdXH7cFx9dQQGYc1tDnH9PGQBkBNzkl3uLDheiVs9v9EI4xx3K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226515 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71032f38--677b--542f--825f--c43a6d71b028-osd--block--71032f38--677b--542f--825f--c43a6d71b028', 'dm-uuid-LVM-O1VDBnk7la3dA9fvBRCK8INxI7gUKwapmWzNjIh5Dt5coqHLZWucSQFbeq1udFyd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226529 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226598 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226613 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226652 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226666 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226677 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226689 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part1', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part14', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part15', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part16', 'scsi-SQEMU_QEMU_HARDDISK_607076d3-244d-457e-a6e9-84d454d62909-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-12 20:07:54.226758 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a-osd--block--2d3a8e2a--8518--5d0a--afd8--96cafa5ccf1a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-d2D0Ta-V3e8-9KEz-3AwB-4e3O-dsh5-WwWg4F', 'scsi-0QEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8', 'scsi-SQEMU_QEMU_HARDDISK_5819bef8-cd9e-4d02-a2ff-d780fbc6c5f8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226771 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--71032f38--677b--542f--825f--c43a6d71b028-osd--block--71032f38--677b--542f--825f--c43a6d71b028'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-E59y2T-5LhQ-PhO1-6zgU-tgBF-5nbX-Or2zhA', 'scsi-0QEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28', 'scsi-SQEMU_QEMU_HARDDISK_f459a603-bea4-4ea2-b1cd-cecdf48dbc28'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226791 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914', 'scsi-SQEMU_QEMU_HARDDISK_0c476a5e-2a4b-4838-9c87-337753775914'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:07:54.226811 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-15-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:07:54.226823 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:07:54.226835 | orchestrator |
2025-07-12 20:07:54.226847 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-07-12 20:07:54.226859 | orchestrator | Saturday 12 July 2025 20:06:02 +0000 (0:00:00.675) 0:00:17.366 *********
2025-07-12 20:07:54.226871 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:07:54.226882 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:07:54.226893 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:07:54.226904 | orchestrator |
2025-07-12 20:07:54.226954 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-07-12 20:07:54.226967 | orchestrator | Saturday 12 July 2025 20:06:02 +0000 (0:00:00.646) 0:00:18.013 *********
2025-07-12 20:07:54.226978 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:07:54.226989 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:07:54.226999 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:07:54.227046 | orchestrator |
2025-07-12 20:07:54.227096 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 20:07:54.227108 | orchestrator | Saturday 12 July 2025 20:06:03 +0000 (0:00:00.475) 0:00:18.489 *********
2025-07-12 20:07:54.227119 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:07:54.227196 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:07:54.227217 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:07:54.227235 | orchestrator |
2025-07-12 20:07:54.227253 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 20:07:54.227271 | orchestrator | Saturday 12 July 2025 20:06:03 +0000 (0:00:00.624) 0:00:19.113 *********
2025-07-12 20:07:54.227311 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.227328 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:07:54.227347 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:07:54.227362 | orchestrator |
2025-07-12 20:07:54.227380 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 20:07:54.227397 | orchestrator | Saturday 12 July 2025 20:06:04 +0000 (0:00:00.285) 0:00:19.399 *********
2025-07-12 20:07:54.227415 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.227430 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:07:54.227448 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:07:54.227465 | orchestrator |
2025-07-12 20:07:54.227484 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 20:07:54.227501 | orchestrator | Saturday 12 July 2025 20:06:04 +0000 (0:00:00.419) 0:00:19.818 *********
2025-07-12 20:07:54.227519 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.227537 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:07:54.227555 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:07:54.227575 | orchestrator |
2025-07-12 20:07:54.227594 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-12 20:07:54.227606 | orchestrator | Saturday 12 July 2025 20:06:05 +0000 (0:00:00.518) 0:00:20.337 *********
2025-07-12 20:07:54.227617 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 20:07:54.227629 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 20:07:54.227640 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 20:07:54.227651 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 20:07:54.227662 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 20:07:54.227673 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 20:07:54.227683 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 20:07:54.227694 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 20:07:54.227705 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 20:07:54.227716 | orchestrator |
2025-07-12 20:07:54.227727 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-07-12 20:07:54.227738 | orchestrator | Saturday 12 July 2025 20:06:06 +0000 (0:00:00.884) 0:00:21.222 *********
2025-07-12 20:07:54.227749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 20:07:54.227761 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 20:07:54.227771 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 20:07:54.227782 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.227793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 20:07:54.227803 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 20:07:54.227814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 20:07:54.227825 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:07:54.227836 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 20:07:54.227847 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 20:07:54.227858 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 20:07:54.227913 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:07:54.227927 | orchestrator |
2025-07-12 20:07:54.227939 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-07-12 20:07:54.227951 | orchestrator | Saturday 12 July 2025 20:06:06 +0000 (0:00:00.391) 0:00:21.613 *********
2025-07-12 20:07:54.227963 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:07:54.227974 | orchestrator |
2025-07-12 20:07:54.227986 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-12 20:07:54.228135 | orchestrator | Saturday 12 July 2025 20:06:07 +0000 (0:00:00.686) 0:00:22.299 *********
2025-07-12 20:07:54.228154 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.228175 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:07:54.228187 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:07:54.228198 | orchestrator |
2025-07-12 20:07:54.228223 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-12 20:07:54.228235 | orchestrator | Saturday 12 July 2025 20:06:07 +0000 (0:00:00.302) 0:00:22.601 *********
2025-07-12 20:07:54.228246 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.228257 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:07:54.228267 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:07:54.228278 | orchestrator |
2025-07-12 20:07:54.228289 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-12 20:07:54.228300 | orchestrator | Saturday 12 July 2025 20:06:07 +0000 (0:00:00.309) 0:00:22.911 *********
2025-07-12 20:07:54.228311 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.228322 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:07:54.228333 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:07:54.228344 | orchestrator |
2025-07-12 20:07:54.228355 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-12 20:07:54.228366 | orchestrator | Saturday 12 July 2025 20:06:08 +0000 (0:00:00.309) 0:00:23.220 *********
2025-07-12 20:07:54.228377 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:07:54.228388 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:07:54.228399 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:07:54.228410 | orchestrator |
2025-07-12 20:07:54.228421 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-12 20:07:54.228432 | orchestrator | Saturday 12 July 2025 20:06:08 +0000 (0:00:00.597) 0:00:23.817 *********
2025-07-12 20:07:54.228443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:07:54.228454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:07:54.228465 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:07:54.228476 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.228487 | orchestrator |
2025-07-12 20:07:54.228498 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-12 20:07:54.228509 | orchestrator | Saturday 12 July 2025 20:06:08 +0000 (0:00:00.369) 0:00:24.187 *********
2025-07-12 20:07:54.228520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:07:54.228531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:07:54.228542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:07:54.228553 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.228563 | orchestrator |
2025-07-12 20:07:54.228575 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-12 20:07:54.228586 | orchestrator | Saturday 12 July 2025 20:06:09 +0000 (0:00:00.353) 0:00:24.541 *********
2025-07-12 20:07:54.228629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:07:54.228643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:07:54.228654 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:07:54.228665 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:07:54.228676 | orchestrator |
2025-07-12 20:07:54.228687 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-12 20:07:54.228699 | orchestrator | Saturday 12 July 2025 20:06:09 +0000 (0:00:00.359) 0:00:24.900 *********
2025-07-12 20:07:54.228710 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:07:54.228721 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:07:54.228732 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:07:54.228743 | orchestrator |
2025-07-12 20:07:54.228754 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-12 20:07:54.228765 | orchestrator | Saturday 12 July 2025 20:06:10 +0000 (0:00:00.325) 0:00:25.226 *********
2025-07-12 20:07:54.228785 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-12 20:07:54.228797 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-12 20:07:54.228808 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-12 20:07:54.228819 | orchestrator |
2025-07-12 20:07:54.228830 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-07-12 20:07:54.228841 | orchestrator | Saturday 12 July 2025 20:06:10 +0000 (0:00:00.509) 0:00:25.735 *********
2025-07-12 20:07:54.228852 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 20:07:54.228864 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:07:54.228905 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:07:54.228918 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:07:54.228930 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] =>
(item=testbed-node-4) 2025-07-12 20:07:54.228941 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 20:07:54.228952 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 20:07:54.228963 | orchestrator | 2025-07-12 20:07:54.228975 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-12 20:07:54.228986 | orchestrator | Saturday 12 July 2025 20:06:11 +0000 (0:00:00.948) 0:00:26.684 ********* 2025-07-12 20:07:54.229053 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-12 20:07:54.229074 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 20:07:54.229094 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:07:54.229106 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-12 20:07:54.229117 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-12 20:07:54.229128 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 20:07:54.229146 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 20:07:54.229157 | orchestrator | 2025-07-12 20:07:54.229177 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-07-12 20:07:54.229188 | orchestrator | Saturday 12 July 2025 20:06:13 +0000 (0:00:01.849) 0:00:28.533 ********* 2025-07-12 20:07:54.229199 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:07:54.229210 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:07:54.229222 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-07-12 20:07:54.229233 | orchestrator | 2025-07-12 20:07:54.229243 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-07-12 20:07:54.229254 | orchestrator | Saturday 12 July 2025 20:06:13 +0000 (0:00:00.374) 0:00:28.908 ********* 2025-07-12 20:07:54.229267 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:07:54.229280 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:07:54.229291 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:07:54.229303 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:07:54.229323 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:07:54.229335 | orchestrator | 2025-07-12 20:07:54.229346 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-07-12 20:07:54.229357 | orchestrator | Saturday 12 July 2025 20:06:58 +0000 (0:00:44.743) 0:01:13.652 ********* 2025-07-12 20:07:54.229368 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229379 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229390 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229401 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229411 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229422 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229433 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-07-12 20:07:54.229444 | orchestrator | 2025-07-12 20:07:54.229455 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-07-12 20:07:54.229466 | orchestrator | Saturday 12 July 2025 20:07:22 +0000 (0:00:23.830) 0:01:37.482 ********* 2025-07-12 20:07:54.229476 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229520 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229532 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229544 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229555 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229565 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229576 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 20:07:54.229587 | orchestrator | 2025-07-12 20:07:54.229598 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-07-12 20:07:54.229609 | orchestrator | Saturday 12 July 2025 20:07:34 +0000 (0:00:12.332) 0:01:49.815 ********* 2025-07-12 20:07:54.229621 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229632 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:07:54.229643 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:07:54.229653 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229664 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:07:54.229681 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:07:54.229699 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229711 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:07:54.229722 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:07:54.229732 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229744 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:07:54.229763 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:07:54.229774 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229785 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-07-12 20:07:54.229796 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:07:54.229807 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:07:54.229818 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:07:54.229829 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:07:54.229840 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-07-12 20:07:54.229851 | orchestrator | 2025-07-12 20:07:54.229862 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:07:54.229873 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-07-12 20:07:54.229885 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 20:07:54.229896 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-12 20:07:54.229908 | orchestrator | 2025-07-12 20:07:54.229919 | orchestrator | 2025-07-12 20:07:54.229930 | orchestrator | 2025-07-12 20:07:54.229941 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:07:54.229952 | orchestrator | Saturday 12 July 2025 20:07:52 +0000 (0:00:18.133) 0:02:07.948 ********* 2025-07-12 20:07:54.229963 | orchestrator | =============================================================================== 2025-07-12 20:07:54.229974 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.74s 2025-07-12 20:07:54.229984 | orchestrator | generate keys ---------------------------------------------------------- 23.83s 2025-07-12 20:07:54.229995 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.13s 
2025-07-12 20:07:54.230110 | orchestrator | get keys from monitors ------------------------------------------------- 12.33s 2025-07-12 20:07:54.230125 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.31s 2025-07-12 20:07:54.230136 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.03s 2025-07-12 20:07:54.230148 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.85s 2025-07-12 20:07:54.230159 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.95s 2025-07-12 20:07:54.230170 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.88s 2025-07-12 20:07:54.230181 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.77s 2025-07-12 20:07:54.230192 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.74s 2025-07-12 20:07:54.230203 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.69s 2025-07-12 20:07:54.230214 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.68s 2025-07-12 20:07:54.230225 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.65s 2025-07-12 20:07:54.230236 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s 2025-07-12 20:07:54.230246 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.62s 2025-07-12 20:07:54.230258 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.61s 2025-07-12 20:07:54.230268 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s 2025-07-12 20:07:54.230279 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.60s 2025-07-12 
20:07:54.230299 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.60s
2025-07-12 20:07:54.230310 | orchestrator | 2025-07-12 20:07:54 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED
2025-07-12 20:07:54.230321 | orchestrator | 2025-07-12 20:07:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:07:57.283495 | orchestrator | 2025-07-12 20:07:57 | INFO  | Task dce19933-dc5f-4392-a74e-ba72d790a3e1 is in state STARTED
2025-07-12 20:07:57.285533 | orchestrator | 2025-07-12 20:07:57 | INFO  | Task 5aec4dbf-0648-4a02-8bab-8f13c1300fe0 is in state STARTED
2025-07-12 20:07:57.286837 | orchestrator | 2025-07-12 20:07:57 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED
2025-07-12 20:07:57.287290 | orchestrator | 2025-07-12 20:07:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:08:24.742340 | orchestrator | 2025-07-12 20:08:24 | INFO  | Task dce19933-dc5f-4392-a74e-ba72d790a3e1 is in state SUCCESS
2025-07-12 20:08:24.743773 | orchestrator | 2025-07-12 20:08:24 | INFO  | Task 8b2293f6-f877-4a3a-94f1-c60020735aae is in state STARTED
2025-07-12 20:08:24.745544 | orchestrator | 2025-07-12 20:08:24 | INFO  | Task 5aec4dbf-0648-4a02-8bab-8f13c1300fe0 is in state STARTED
2025-07-12 20:08:24.746799 | orchestrator | 2025-07-12 20:08:24 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED
2025-07-12 20:08:24.746830 | orchestrator | 2025-07-12 20:08:24 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:08:39.949446 | orchestrator | 2025-07-12 20:08:39 | INFO  | Task 8b2293f6-f877-4a3a-94f1-c60020735aae is in state STARTED
2025-07-12 20:08:39.950527 | orchestrator | 2025-07-12 20:08:39 | 
INFO  | Task 5aec4dbf-0648-4a02-8bab-8f13c1300fe0 is in state STARTED 2025-07-12 20:08:39.952052 | orchestrator | 2025-07-12 20:08:39 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:08:39.952095 | orchestrator | 2025-07-12 20:08:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:08:42.989610 | orchestrator | 2025-07-12 20:08:42 | INFO  | Task 8b2293f6-f877-4a3a-94f1-c60020735aae is in state STARTED 2025-07-12 20:08:42.991569 | orchestrator | 2025-07-12 20:08:42 | INFO  | Task 5aec4dbf-0648-4a02-8bab-8f13c1300fe0 is in state SUCCESS 2025-07-12 20:08:42.993816 | orchestrator | 2025-07-12 20:08:42.993905 | orchestrator | 2025-07-12 20:08:42.993921 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-07-12 20:08:42.993935 | orchestrator | 2025-07-12 20:08:42.993946 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-07-12 20:08:42.993958 | orchestrator | Saturday 12 July 2025 20:07:57 +0000 (0:00:00.154) 0:00:00.154 ********* 2025-07-12 20:08:42.993969 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-07-12 20:08:42.993982 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-12 20:08:42.993994 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-12 20:08:42.994123 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 20:08:42.994142 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-12 20:08:42.994154 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-07-12 20:08:42.994165 | orchestrator | ok: [testbed-manager -> 
testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-07-12 20:08:42.994176 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-07-12 20:08:42.994188 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-07-12 20:08:42.994199 | orchestrator | 2025-07-12 20:08:42.994210 | orchestrator | TASK [Create share directory] ************************************************** 2025-07-12 20:08:42.994227 | orchestrator | Saturday 12 July 2025 20:08:01 +0000 (0:00:04.340) 0:00:04.495 ********* 2025-07-12 20:08:42.994337 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 20:08:42.994366 | orchestrator | 2025-07-12 20:08:42.994383 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-07-12 20:08:42.994403 | orchestrator | Saturday 12 July 2025 20:08:02 +0000 (0:00:00.946) 0:00:05.441 ********* 2025-07-12 20:08:42.994424 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-07-12 20:08:42.994444 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-12 20:08:42.994464 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-12 20:08:42.994483 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 20:08:42.994496 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-12 20:08:42.994509 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-07-12 20:08:42.994546 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-07-12 20:08:42.994559 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-07-12 20:08:42.994572 | 
orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-07-12 20:08:42.994584 | orchestrator | 2025-07-12 20:08:42.994597 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-07-12 20:08:42.994610 | orchestrator | Saturday 12 July 2025 20:08:15 +0000 (0:00:12.878) 0:00:18.320 ********* 2025-07-12 20:08:42.994622 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-07-12 20:08:42.994634 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-12 20:08:42.994644 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-12 20:08:42.994656 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 20:08:42.994666 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-12 20:08:42.994678 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-07-12 20:08:42.994689 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-07-12 20:08:42.994699 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-07-12 20:08:42.994710 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-07-12 20:08:42.994721 | orchestrator | 2025-07-12 20:08:42.994732 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:08:42.994744 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:08:42.994757 | orchestrator | 2025-07-12 20:08:42.994768 | orchestrator | 2025-07-12 20:08:42.994779 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:08:42.994790 | orchestrator | Saturday 12 July 2025 20:08:21 +0000 (0:00:06.389) 0:00:24.709 ********* 2025-07-12 
20:08:42.994801 | orchestrator | =============================================================================== 2025-07-12 20:08:42.994811 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.88s 2025-07-12 20:08:42.994822 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.39s 2025-07-12 20:08:42.994833 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.34s 2025-07-12 20:08:42.994996 | orchestrator | Create share directory -------------------------------------------------- 0.95s 2025-07-12 20:08:42.995048 | orchestrator | 2025-07-12 20:08:42.995061 | orchestrator | 2025-07-12 20:08:42.995072 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:08:42.995083 | orchestrator | 2025-07-12 20:08:42.995114 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:08:42.995126 | orchestrator | Saturday 12 July 2025 20:06:55 +0000 (0:00:00.268) 0:00:00.268 ********* 2025-07-12 20:08:42.995137 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.995149 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.995159 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.995170 | orchestrator | 2025-07-12 20:08:42.995181 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:08:42.995192 | orchestrator | Saturday 12 July 2025 20:06:56 +0000 (0:00:00.286) 0:00:00.555 ********* 2025-07-12 20:08:42.995204 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-07-12 20:08:42.995215 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-07-12 20:08:42.995312 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-07-12 20:08:42.995329 | orchestrator | 2025-07-12 20:08:42.995340 | orchestrator | PLAY [Apply role horizon] 
****************************************************** 2025-07-12 20:08:42.995351 | orchestrator | 2025-07-12 20:08:42.995363 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 20:08:42.995389 | orchestrator | Saturday 12 July 2025 20:06:56 +0000 (0:00:00.421) 0:00:00.976 ********* 2025-07-12 20:08:42.995401 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:08:42.995412 | orchestrator | 2025-07-12 20:08:42.995425 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-07-12 20:08:42.995444 | orchestrator | Saturday 12 July 2025 20:06:57 +0000 (0:00:00.516) 0:00:01.492 ********* 2025-07-12 20:08:42.995480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:08:42.995528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:08:42.995564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:08:42.995578 | orchestrator | 2025-07-12 20:08:42.995590 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-07-12 20:08:42.995602 | orchestrator | Saturday 12 July 2025 20:06:58 +0000 (0:00:01.046) 0:00:02.539 ********* 2025-07-12 20:08:42.995614 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.995625 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.995636 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.995647 | orchestrator | 2025-07-12 20:08:42.995658 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-07-12 20:08:42.995669 | orchestrator | Saturday 12 July 2025 20:06:58 +0000 (0:00:00.454) 0:00:02.994 ********* 2025-07-12 20:08:42.995680 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-12 20:08:42.995691 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-12 20:08:42.995709 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-07-12 20:08:42.995721 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-07-12 20:08:42.995739 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-07-12 20:08:42.995751 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-07-12 20:08:42.995762 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-07-12 20:08:42.995773 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-07-12 20:08:42.995784 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-12 20:08:42.995795 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-12 20:08:42.995806 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-07-12 20:08:42.995817 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-07-12 20:08:42.995828 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-07-12 20:08:42.995839 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-07-12 20:08:42.995851 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-07-12 20:08:42.995863 | 
orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-07-12 20:08:42.995874 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-12 20:08:42.995885 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-12 20:08:42.995901 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-07-12 20:08:42.995913 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-07-12 20:08:42.995924 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-07-12 20:08:42.995935 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-07-12 20:08:42.995948 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-07-12 20:08:42.995961 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-07-12 20:08:42.995975 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-07-12 20:08:42.995989 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-07-12 20:08:42.996071 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-07-12 20:08:42.996096 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-07-12 20:08:42.996116 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'keystone', 'enabled': True}) 2025-07-12 20:08:42.996137 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-07-12 20:08:42.996158 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-07-12 20:08:42.996180 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-07-12 20:08:42.996202 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-07-12 20:08:42.996235 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-07-12 20:08:42.996247 | orchestrator | 2025-07-12 20:08:42.996259 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:08:42.996270 | orchestrator | Saturday 12 July 2025 20:06:59 +0000 (0:00:00.678) 0:00:03.672 ********* 2025-07-12 20:08:42.996282 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.996302 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.996315 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.996330 | orchestrator | 2025-07-12 20:08:42.996347 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:08:42.996362 | orchestrator | Saturday 12 July 2025 20:06:59 +0000 (0:00:00.303) 0:00:03.976 ********* 2025-07-12 20:08:42.996380 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.996392 | orchestrator | 2025-07-12 20:08:42.996408 | orchestrator | TASK [horizon : Update custom policy file name] 
********************************
2025-07-12 20:08:42.996437 | orchestrator | Saturday 12 July 2025 20:06:59 +0000 (0:00:00.130) 0:00:04.106 *********
2025-07-12 20:08:42.996456 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:08:42.996473 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:08:42.996489 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:08:42.996505 | orchestrator |
2025-07-12 20:08:42.996516 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-12 20:08:42.996525 | orchestrator | Saturday 12 July 2025 20:07:00 +0000 (0:00:00.435) 0:00:04.542 *********
2025-07-12 20:08:42.996535 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:08:42.996545 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:08:42.996555 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:08:42.996565 | orchestrator |
2025-07-12 20:08:42.996575 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-12 20:08:42.996585 | orchestrator | Saturday 12 July 2025 20:07:00 +0000 (0:00:00.122) 0:00:04.839 *********
2025-07-12 20:08:42.996595 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:08:42.996604 | orchestrator |
2025-07-12 20:08:42.996614 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-12 20:08:42.996624 | orchestrator | Saturday 12 July 2025 20:07:00 +0000 (0:00:00.298) 0:00:04.962 *********
2025-07-12 20:08:42.996634 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:08:42.996644 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:08:42.996654 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:08:42.996664 | orchestrator |
2025-07-12 20:08:42.996674 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-12 20:08:42.996684 | orchestrator | Saturday 12 July 2025 20:07:00 +0000 (0:00:00.298) 0:00:05.261 *********
2025-07-12
20:08:42.996693 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.996703 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.996713 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.996723 | orchestrator | 2025-07-12 20:08:42.996733 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:08:42.996743 | orchestrator | Saturday 12 July 2025 20:07:01 +0000 (0:00:00.303) 0:00:05.564 ********* 2025-07-12 20:08:42.996753 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.996763 | orchestrator | 2025-07-12 20:08:42.996779 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:08:42.996789 | orchestrator | Saturday 12 July 2025 20:07:01 +0000 (0:00:00.322) 0:00:05.887 ********* 2025-07-12 20:08:42.996799 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.996808 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.996818 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.996827 | orchestrator | 2025-07-12 20:08:42.996837 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:08:42.996847 | orchestrator | Saturday 12 July 2025 20:07:01 +0000 (0:00:00.290) 0:00:06.177 ********* 2025-07-12 20:08:42.996857 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.996874 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.996884 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.996894 | orchestrator | 2025-07-12 20:08:42.996904 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:08:42.996914 | orchestrator | Saturday 12 July 2025 20:07:02 +0000 (0:00:00.306) 0:00:06.484 ********* 2025-07-12 20:08:42.996931 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.996948 | orchestrator | 2025-07-12 20:08:42.996966 | orchestrator | TASK [horizon : Update custom 
policy file name] ******************************** 2025-07-12 20:08:42.996985 | orchestrator | Saturday 12 July 2025 20:07:02 +0000 (0:00:00.134) 0:00:06.619 ********* 2025-07-12 20:08:42.996997 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997037 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.997047 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.997057 | orchestrator | 2025-07-12 20:08:42.997069 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:08:42.997086 | orchestrator | Saturday 12 July 2025 20:07:02 +0000 (0:00:00.290) 0:00:06.909 ********* 2025-07-12 20:08:42.997096 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.997106 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.997116 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.997126 | orchestrator | 2025-07-12 20:08:42.997136 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:08:42.997146 | orchestrator | Saturday 12 July 2025 20:07:03 +0000 (0:00:00.498) 0:00:07.408 ********* 2025-07-12 20:08:42.997156 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997165 | orchestrator | 2025-07-12 20:08:42.997175 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:08:42.997185 | orchestrator | Saturday 12 July 2025 20:07:03 +0000 (0:00:00.132) 0:00:07.540 ********* 2025-07-12 20:08:42.997194 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997204 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.997214 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.997224 | orchestrator | 2025-07-12 20:08:42.997234 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:08:42.997243 | orchestrator | Saturday 12 July 2025 20:07:03 +0000 (0:00:00.292) 0:00:07.833 
********* 2025-07-12 20:08:42.997253 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.997263 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.997273 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.997282 | orchestrator | 2025-07-12 20:08:42.997292 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:08:42.997302 | orchestrator | Saturday 12 July 2025 20:07:03 +0000 (0:00:00.333) 0:00:08.167 ********* 2025-07-12 20:08:42.997312 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997321 | orchestrator | 2025-07-12 20:08:42.997331 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:08:42.997341 | orchestrator | Saturday 12 July 2025 20:07:03 +0000 (0:00:00.142) 0:00:08.309 ********* 2025-07-12 20:08:42.997351 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997361 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.997371 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.997381 | orchestrator | 2025-07-12 20:08:42.997391 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:08:42.997400 | orchestrator | Saturday 12 July 2025 20:07:04 +0000 (0:00:00.461) 0:00:08.771 ********* 2025-07-12 20:08:42.997410 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.997420 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.997434 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.997451 | orchestrator | 2025-07-12 20:08:42.997478 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:08:42.997496 | orchestrator | Saturday 12 July 2025 20:07:04 +0000 (0:00:00.297) 0:00:09.068 ********* 2025-07-12 20:08:42.997507 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997517 | orchestrator | 2025-07-12 20:08:42.997527 | orchestrator | TASK 
[horizon : Update custom policy file name] ******************************** 2025-07-12 20:08:42.997553 | orchestrator | Saturday 12 July 2025 20:07:04 +0000 (0:00:00.129) 0:00:09.198 ********* 2025-07-12 20:08:42.997564 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997574 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.997584 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.997594 | orchestrator | 2025-07-12 20:08:42.997607 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:08:42.997622 | orchestrator | Saturday 12 July 2025 20:07:05 +0000 (0:00:00.284) 0:00:09.483 ********* 2025-07-12 20:08:42.997632 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.997642 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.997652 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.997662 | orchestrator | 2025-07-12 20:08:42.997672 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:08:42.997682 | orchestrator | Saturday 12 July 2025 20:07:05 +0000 (0:00:00.346) 0:00:09.829 ********* 2025-07-12 20:08:42.997692 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997701 | orchestrator | 2025-07-12 20:08:42.997712 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:08:42.997722 | orchestrator | Saturday 12 July 2025 20:07:05 +0000 (0:00:00.122) 0:00:09.952 ********* 2025-07-12 20:08:42.997731 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997741 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.997757 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.997773 | orchestrator | 2025-07-12 20:08:42.997788 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:08:42.997810 | orchestrator | Saturday 12 July 2025 20:07:06 +0000 
(0:00:00.505) 0:00:10.458 ********* 2025-07-12 20:08:42.997826 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.997842 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.997856 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.997870 | orchestrator | 2025-07-12 20:08:42.997884 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:08:42.997898 | orchestrator | Saturday 12 July 2025 20:07:06 +0000 (0:00:00.306) 0:00:10.765 ********* 2025-07-12 20:08:42.997912 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.997994 | orchestrator | 2025-07-12 20:08:42.998141 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:08:42.998159 | orchestrator | Saturday 12 July 2025 20:07:06 +0000 (0:00:00.125) 0:00:10.890 ********* 2025-07-12 20:08:42.998170 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.998181 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.998191 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.998201 | orchestrator | 2025-07-12 20:08:42.998210 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:08:42.998220 | orchestrator | Saturday 12 July 2025 20:07:06 +0000 (0:00:00.289) 0:00:11.179 ********* 2025-07-12 20:08:42.998229 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:08:42.998240 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:08:42.998249 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:08:42.998259 | orchestrator | 2025-07-12 20:08:42.998269 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:08:42.998279 | orchestrator | Saturday 12 July 2025 20:07:07 +0000 (0:00:00.469) 0:00:11.649 ********* 2025-07-12 20:08:42.998289 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.998298 | orchestrator | 2025-07-12 20:08:42.998308 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:08:42.998318 | orchestrator | Saturday 12 July 2025 20:07:07 +0000 (0:00:00.141) 0:00:11.791 ********* 2025-07-12 20:08:42.998328 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.998338 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.998347 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.998357 | orchestrator | 2025-07-12 20:08:42.998367 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-07-12 20:08:42.998388 | orchestrator | Saturday 12 July 2025 20:07:07 +0000 (0:00:00.309) 0:00:12.101 ********* 2025-07-12 20:08:42.998398 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:08:42.998408 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:08:42.998418 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:08:42.998427 | orchestrator | 2025-07-12 20:08:42.998437 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-07-12 20:08:42.998447 | orchestrator | Saturday 12 July 2025 20:07:09 +0000 (0:00:01.670) 0:00:13.772 ********* 2025-07-12 20:08:42.998457 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 20:08:42.998467 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 20:08:42.998476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 20:08:42.998486 | orchestrator | 2025-07-12 20:08:42.998496 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-07-12 20:08:42.998505 | orchestrator | Saturday 12 July 2025 20:07:11 +0000 (0:00:01.818) 0:00:15.590 ********* 2025-07-12 20:08:42.998515 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 20:08:42.998525 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 20:08:42.998535 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 20:08:42.998545 | orchestrator | 2025-07-12 20:08:42.998554 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-07-12 20:08:42.998564 | orchestrator | Saturday 12 July 2025 20:07:13 +0000 (0:00:02.103) 0:00:17.693 ********* 2025-07-12 20:08:42.998588 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 20:08:42.998597 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 20:08:42.998605 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 20:08:42.998613 | orchestrator | 2025-07-12 20:08:42.998621 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-07-12 20:08:42.998629 | orchestrator | Saturday 12 July 2025 20:07:14 +0000 (0:00:01.671) 0:00:19.364 ********* 2025-07-12 20:08:42.998637 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.998645 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.998652 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.998660 | orchestrator | 2025-07-12 20:08:42.998668 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-07-12 20:08:42.998676 | orchestrator | Saturday 12 July 2025 20:07:15 +0000 (0:00:00.294) 0:00:19.659 ********* 2025-07-12 20:08:42.998684 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.998692 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
20:08:42.998700 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.998707 | orchestrator | 2025-07-12 20:08:42.998715 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 20:08:42.998723 | orchestrator | Saturday 12 July 2025 20:07:15 +0000 (0:00:00.321) 0:00:19.981 ********* 2025-07-12 20:08:42.998731 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:08:42.998739 | orchestrator | 2025-07-12 20:08:42.998747 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-07-12 20:08:42.998755 | orchestrator | Saturday 12 July 2025 20:07:16 +0000 (0:00:00.617) 0:00:20.598 ********* 2025-07-12 20:08:42.998772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:08:42.998804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:08:42.998820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:08:42.998829 | orchestrator | 2025-07-12 20:08:42.998838 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-07-12 20:08:42.998845 | orchestrator | Saturday 12 July 2025 20:07:17 +0000 (0:00:01.440) 0:00:22.039 ********* 2025-07-12 20:08:42.998866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:08:42.998881 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.998890 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:08:42.998904 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.998917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:08:42.998933 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.998941 | orchestrator | 2025-07-12 20:08:42.998949 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-07-12 20:08:42.998957 | orchestrator | Saturday 12 July 2025 20:07:18 +0000 (0:00:00.668) 0:00:22.708 ********* 2025-07-12 20:08:42.998973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:08:42.998982 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.998996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:08:42.999032 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.999049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:08:42.999059 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.999067 | orchestrator | 2025-07-12 20:08:42.999075 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-07-12 20:08:42.999088 | orchestrator | Saturday 12 July 2025 20:07:19 +0000 (0:00:00.886) 0:00:23.595 ********* 2025-07-12 20:08:42.999102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:08:42.999118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:08:42.999142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:08:42.999151 | orchestrator | 2025-07-12 20:08:42.999159 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 20:08:42.999167 | orchestrator | Saturday 12 July 2025 20:07:20 +0000 (0:00:01.182) 0:00:24.777 ********* 2025-07-12 20:08:42.999175 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:08:42.999183 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:08:42.999191 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:08:42.999199 | orchestrator | 2025-07-12 20:08:42.999207 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 20:08:42.999215 | orchestrator | Saturday 12 July 2025 20:07:20 +0000 (0:00:00.300) 0:00:25.078 ********* 2025-07-12 20:08:42.999223 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:08:42.999231 | orchestrator | 2025-07-12 20:08:42.999239 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-07-12 20:08:42.999247 | orchestrator | Saturday 12 July 2025 20:07:21 +0000 (0:00:00.717) 0:00:25.796 ********* 2025-07-12 20:08:42.999255 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:08:42.999263 | orchestrator | 2025-07-12 20:08:42.999276 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-07-12 20:08:42.999284 | orchestrator | Saturday 12 July 2025 
20:07:23 +0000 (0:00:02.111) 0:00:27.907 ********* 2025-07-12 20:08:42.999292 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:08:42.999300 | orchestrator | 2025-07-12 20:08:42.999308 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-07-12 20:08:42.999321 | orchestrator | Saturday 12 July 2025 20:07:25 +0000 (0:00:02.160) 0:00:30.068 ********* 2025-07-12 20:08:42.999329 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:08:42.999337 | orchestrator | 2025-07-12 20:08:42.999345 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 20:08:42.999353 | orchestrator | Saturday 12 July 2025 20:07:41 +0000 (0:00:16.140) 0:00:46.208 ********* 2025-07-12 20:08:42.999361 | orchestrator | 2025-07-12 20:08:42.999369 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 20:08:42.999377 | orchestrator | Saturday 12 July 2025 20:07:41 +0000 (0:00:00.068) 0:00:46.277 ********* 2025-07-12 20:08:42.999385 | orchestrator | 2025-07-12 20:08:42.999393 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 20:08:42.999401 | orchestrator | Saturday 12 July 2025 20:07:41 +0000 (0:00:00.064) 0:00:46.341 ********* 2025-07-12 20:08:42.999408 | orchestrator | 2025-07-12 20:08:42.999416 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-07-12 20:08:42.999424 | orchestrator | Saturday 12 July 2025 20:07:41 +0000 (0:00:00.066) 0:00:46.407 ********* 2025-07-12 20:08:42.999432 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:08:42.999440 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:08:42.999448 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:08:42.999456 | orchestrator | 2025-07-12 20:08:42.999464 | orchestrator | PLAY RECAP ********************************************************************* 
2025-07-12 20:08:42.999477 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-07-12 20:08:42.999486 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-12 20:08:42.999494 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-12 20:08:42.999502 | orchestrator | 2025-07-12 20:08:42.999513 | orchestrator | 2025-07-12 20:08:42.999527 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:08:42.999535 | orchestrator | Saturday 12 July 2025 20:08:40 +0000 (0:00:58.290) 0:01:44.698 ********* 2025-07-12 20:08:42.999543 | orchestrator | =============================================================================== 2025-07-12 20:08:42.999551 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.29s 2025-07-12 20:08:42.999559 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.14s 2025-07-12 20:08:42.999567 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.16s 2025-07-12 20:08:42.999575 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.11s 2025-07-12 20:08:42.999583 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.10s 2025-07-12 20:08:42.999591 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.82s 2025-07-12 20:08:42.999599 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.67s 2025-07-12 20:08:42.999607 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.67s 2025-07-12 20:08:42.999615 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.44s 2025-07-12 20:08:42.999623 
| orchestrator | horizon : Deploy horizon container -------------------------------------- 1.18s 2025-07-12 20:08:42.999631 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.05s 2025-07-12 20:08:42.999639 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.89s 2025-07-12 20:08:42.999661 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s 2025-07-12 20:08:42.999670 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2025-07-12 20:08:42.999678 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2025-07-12 20:08:42.999706 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2025-07-12 20:08:42.999715 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2025-07-12 20:08:42.999723 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2025-07-12 20:08:42.999731 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2025-07-12 20:08:42.999738 | orchestrator | horizon : Update policy file name --------------------------------------- 0.47s 2025-07-12 20:08:42.999746 | orchestrator | 2025-07-12 20:08:42 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:08:42.999754 | orchestrator | 2025-07-12 20:08:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:08:46.031465 | orchestrator | 2025-07-12 20:08:46 | INFO  | Task 8b2293f6-f877-4a3a-94f1-c60020735aae is in state STARTED 2025-07-12 20:08:46.032689 | orchestrator | 2025-07-12 20:08:46 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:08:46.032732 | orchestrator | 2025-07-12 20:08:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 
20:08:49.073852 | orchestrator | 2025-07-12 20:08:49 | INFO  | Task 8b2293f6-f877-4a3a-94f1-c60020735aae is in state STARTED 2025-07-12 20:08:49.074913 | orchestrator | 2025-07-12 20:08:49 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:08:49.074945 | orchestrator | 2025-07-12 20:08:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:09:16.489204 | orchestrator | 2025-07-12 20:09:16 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:09:16.494522 | orchestrator | 2025-07-12 20:09:16 | INFO  | Task 8b2293f6-f877-4a3a-94f1-c60020735aae is in state SUCCESS 2025-07-12 20:09:16.494587 | orchestrator | 2025-07-12 20:09:16 | INFO  | Task 8915d2ec-9189-4afb-86b9-82bf6d943414 is in state STARTED 2025-07-12 20:09:16.494602 | orchestrator | 2025-07-12 20:09:16 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state
STARTED 2025-07-12 20:09:16.496209 | orchestrator | 2025-07-12 20:09:16 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:09:16.496283 | orchestrator | 2025-07-12 20:09:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:09:19.553790 | orchestrator | 2025-07-12 20:09:19 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:09:19.555061 | orchestrator | 2025-07-12 20:09:19 | INFO  | Task 8915d2ec-9189-4afb-86b9-82bf6d943414 is in state STARTED 2025-07-12 20:09:19.555102 | orchestrator | 2025-07-12 20:09:19 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state STARTED 2025-07-12 20:09:19.555993 | orchestrator | 2025-07-12 20:09:19 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:09:19.556046 | orchestrator | 2025-07-12 20:09:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:09:22.590994 | orchestrator | 2025-07-12 20:09:22 | INFO  | Task c3791263-be21-4d3c-b76b-50b7164ca7cf is in state STARTED 2025-07-12 20:09:22.592947 | orchestrator | 2025-07-12 20:09:22 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:09:22.594940 | orchestrator | 2025-07-12 20:09:22 | INFO  | Task 8915d2ec-9189-4afb-86b9-82bf6d943414 is in state SUCCESS 2025-07-12 20:09:22.597083 | orchestrator | 2025-07-12 20:09:22 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state STARTED 2025-07-12 20:09:22.600794 | orchestrator | 2025-07-12 20:09:22 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state STARTED 2025-07-12 20:09:22.602753 | orchestrator | 2025-07-12 20:09:22 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:09:22.602787 | orchestrator | 2025-07-12 20:09:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:09:25.646187 | orchestrator | 2025-07-12 20:09:25 | INFO  | Task c3791263-be21-4d3c-b76b-50b7164ca7cf is in state STARTED 2025-07-12 
20:09:25.646319 | orchestrator | 2025-07-12 20:09:25 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:09:25.646386 | orchestrator | 2025-07-12 20:09:25 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state STARTED 2025-07-12 20:09:25.646961 | orchestrator | 2025-07-12 20:09:25 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state STARTED 2025-07-12 20:09:25.647938 | orchestrator | 2025-07-12 20:09:25 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:09:25.647966 | orchestrator | 2025-07-12 20:09:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:09:28.671647 | orchestrator | 2025-07-12 20:09:28 | INFO  | Task c3791263-be21-4d3c-b76b-50b7164ca7cf is in state STARTED 2025-07-12 20:09:28.672518 | orchestrator | 2025-07-12 20:09:28 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:09:28.675721 | orchestrator | 2025-07-12 20:09:28 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state STARTED 2025-07-12 20:09:28.677536 | orchestrator | 2025-07-12 20:09:28 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state STARTED 2025-07-12 20:09:28.680052 | orchestrator | 2025-07-12 20:09:28 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:09:28.680076 | orchestrator | 2025-07-12 20:09:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:09:31.717585 | orchestrator | 2025-07-12 20:09:31 | INFO  | Task c3791263-be21-4d3c-b76b-50b7164ca7cf is in state STARTED 2025-07-12 20:09:31.720798 | orchestrator | 2025-07-12 20:09:31 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:09:31.723150 | orchestrator | 2025-07-12 20:09:31 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state STARTED 2025-07-12 20:09:31.728510 | orchestrator | 2025-07-12 20:09:31 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state STARTED 2025-07-12 
20:09:31.729948 | orchestrator | 2025-07-12 20:09:31 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:09:31.730312 | orchestrator | 2025-07-12 20:09:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:09:34.772627 | orchestrator | 2025-07-12 20:09:34 | INFO  | Task c3791263-be21-4d3c-b76b-50b7164ca7cf is in state STARTED 2025-07-12 20:09:34.772749 | orchestrator | 2025-07-12 20:09:34 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:09:34.772771 | orchestrator | 2025-07-12 20:09:34 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state STARTED 2025-07-12 20:09:34.772787 | orchestrator | 2025-07-12 20:09:34 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state STARTED 2025-07-12 20:09:34.773374 | orchestrator | 2025-07-12 20:09:34 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state STARTED 2025-07-12 20:09:34.773414 | orchestrator | 2025-07-12 20:09:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:09:37.823009 | orchestrator | 2025-07-12 20:09:37 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:09:37.823263 | orchestrator | 2025-07-12 20:09:37 | INFO  | Task c3791263-be21-4d3c-b76b-50b7164ca7cf is in state STARTED 2025-07-12 20:09:37.824750 | orchestrator | 2025-07-12 20:09:37 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:09:37.825602 | orchestrator | 2025-07-12 20:09:37 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state STARTED 2025-07-12 20:09:37.827235 | orchestrator | 2025-07-12 20:09:37 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state STARTED 2025-07-12 20:09:37.828793 | orchestrator | 2025-07-12 20:09:37 | INFO  | Task 0e02dc7b-47eb-4bcd-b8a4-a25aa9a3c65f is in state SUCCESS 2025-07-12 20:09:37.829287 | orchestrator | 2025-07-12 20:09:37.829320 | orchestrator | 2025-07-12 20:09:37.829333 | orchestrator | PLAY [Apply role 
cephclient] *************************************************** 2025-07-12 20:09:37.829372 | orchestrator | 2025-07-12 20:09:37.829384 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-07-12 20:09:37.829395 | orchestrator | Saturday 12 July 2025 20:08:26 +0000 (0:00:00.252) 0:00:00.252 ********* 2025-07-12 20:09:37.829407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-07-12 20:09:37.829420 | orchestrator | 2025-07-12 20:09:37.829431 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-07-12 20:09:37.829442 | orchestrator | Saturday 12 July 2025 20:08:26 +0000 (0:00:00.233) 0:00:00.486 ********* 2025-07-12 20:09:37.829453 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-07-12 20:09:37.829464 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-07-12 20:09:37.829475 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-07-12 20:09:37.829487 | orchestrator | 2025-07-12 20:09:37.829498 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-07-12 20:09:37.829508 | orchestrator | Saturday 12 July 2025 20:08:27 +0000 (0:00:01.202) 0:00:01.688 ********* 2025-07-12 20:09:37.829534 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-07-12 20:09:37.829545 | orchestrator | 2025-07-12 20:09:37.829618 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-07-12 20:09:37.829750 | orchestrator | Saturday 12 July 2025 20:08:28 +0000 (0:00:01.115) 0:00:02.803 ********* 2025-07-12 20:09:37.830376 | orchestrator | changed: [testbed-manager] 2025-07-12 20:09:37.830418 | orchestrator | 2025-07-12 20:09:37.830436 | 
orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-07-12 20:09:37.830457 | orchestrator | Saturday 12 July 2025 20:08:29 +0000 (0:00:00.938) 0:00:03.742 ********* 2025-07-12 20:09:37.830474 | orchestrator | changed: [testbed-manager] 2025-07-12 20:09:37.830490 | orchestrator | 2025-07-12 20:09:37.830507 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-07-12 20:09:37.830525 | orchestrator | Saturday 12 July 2025 20:08:30 +0000 (0:00:00.910) 0:00:04.653 ********* 2025-07-12 20:09:37.830543 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-07-12 20:09:37.830562 | orchestrator | ok: [testbed-manager] 2025-07-12 20:09:37.830581 | orchestrator | 2025-07-12 20:09:37.830599 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-07-12 20:09:37.830618 | orchestrator | Saturday 12 July 2025 20:09:05 +0000 (0:00:35.182) 0:00:39.835 ********* 2025-07-12 20:09:37.830630 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-07-12 20:09:37.830641 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-07-12 20:09:37.830652 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-07-12 20:09:37.830663 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-07-12 20:09:37.830674 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-07-12 20:09:37.830685 | orchestrator | 2025-07-12 20:09:37.830699 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-07-12 20:09:37.830718 | orchestrator | Saturday 12 July 2025 20:09:09 +0000 (0:00:03.896) 0:00:43.732 ********* 2025-07-12 20:09:37.830736 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-07-12 20:09:37.830753 | orchestrator | 2025-07-12 20:09:37.830771 | orchestrator | TASK [osism.services.cephclient : Include 
package tasks] *********************** 2025-07-12 20:09:37.830789 | orchestrator | Saturday 12 July 2025 20:09:10 +0000 (0:00:00.447) 0:00:44.179 ********* 2025-07-12 20:09:37.830809 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:09:37.830828 | orchestrator | 2025-07-12 20:09:37.830841 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-07-12 20:09:37.830853 | orchestrator | Saturday 12 July 2025 20:09:10 +0000 (0:00:00.128) 0:00:44.308 ********* 2025-07-12 20:09:37.830874 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:09:37.830919 | orchestrator | 2025-07-12 20:09:37.830940 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-07-12 20:09:37.830960 | orchestrator | Saturday 12 July 2025 20:09:10 +0000 (0:00:00.309) 0:00:44.618 ********* 2025-07-12 20:09:37.830979 | orchestrator | changed: [testbed-manager] 2025-07-12 20:09:37.830997 | orchestrator | 2025-07-12 20:09:37.831015 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-07-12 20:09:37.831055 | orchestrator | Saturday 12 July 2025 20:09:12 +0000 (0:00:01.678) 0:00:46.296 ********* 2025-07-12 20:09:37.831068 | orchestrator | changed: [testbed-manager] 2025-07-12 20:09:37.831079 | orchestrator | 2025-07-12 20:09:37.831091 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-07-12 20:09:37.831102 | orchestrator | Saturday 12 July 2025 20:09:12 +0000 (0:00:00.729) 0:00:47.026 ********* 2025-07-12 20:09:37.831113 | orchestrator | changed: [testbed-manager] 2025-07-12 20:09:37.831124 | orchestrator | 2025-07-12 20:09:37.831135 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-07-12 20:09:37.831146 | orchestrator | Saturday 12 July 2025 20:09:13 +0000 (0:00:00.582) 0:00:47.608 ********* 2025-07-12 20:09:37.831157 | orchestrator | ok: 
[testbed-manager] => (item=ceph) 2025-07-12 20:09:37.831168 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-07-12 20:09:37.831179 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-07-12 20:09:37.831190 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-07-12 20:09:37.831201 | orchestrator | 2025-07-12 20:09:37.831212 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:09:37.831224 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:09:37.831237 | orchestrator | 2025-07-12 20:09:37.831248 | orchestrator | 2025-07-12 20:09:37.831276 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:09:37.831288 | orchestrator | Saturday 12 July 2025 20:09:14 +0000 (0:00:01.492) 0:00:49.101 ********* 2025-07-12 20:09:37.831299 | orchestrator | =============================================================================== 2025-07-12 20:09:37.831310 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.18s 2025-07-12 20:09:37.831321 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.90s 2025-07-12 20:09:37.831332 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.68s 2025-07-12 20:09:37.831343 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.49s 2025-07-12 20:09:37.831354 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.20s 2025-07-12 20:09:37.831365 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.12s 2025-07-12 20:09:37.831376 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.94s 2025-07-12 20:09:37.831387 | orchestrator | osism.services.cephclient : Copy 
docker-compose.yml file ---------------- 0.91s 2025-07-12 20:09:37.831398 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.73s 2025-07-12 20:09:37.831419 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.58s 2025-07-12 20:09:37.831430 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.45s 2025-07-12 20:09:37.831441 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2025-07-12 20:09:37.831453 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s 2025-07-12 20:09:37.831463 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-07-12 20:09:37.831474 | orchestrator | 2025-07-12 20:09:37.831485 | orchestrator | 2025-07-12 20:09:37.831497 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:09:37.831508 | orchestrator | 2025-07-12 20:09:37.831518 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:09:37.831529 | orchestrator | Saturday 12 July 2025 20:09:19 +0000 (0:00:00.172) 0:00:00.172 ********* 2025-07-12 20:09:37.831551 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:09:37.831562 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:09:37.831573 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:09:37.831584 | orchestrator | 2025-07-12 20:09:37.831596 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:09:37.831607 | orchestrator | Saturday 12 July 2025 20:09:19 +0000 (0:00:00.309) 0:00:00.481 ********* 2025-07-12 20:09:37.831618 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-12 20:09:37.831629 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-12 20:09:37.831640 | 
orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-12 20:09:37.831651 | orchestrator | 2025-07-12 20:09:37.831662 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-07-12 20:09:37.831673 | orchestrator | 2025-07-12 20:09:37.831684 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-07-12 20:09:37.831695 | orchestrator | Saturday 12 July 2025 20:09:20 +0000 (0:00:00.679) 0:00:01.161 ********* 2025-07-12 20:09:37.831706 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:09:37.831717 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:09:37.831728 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:09:37.831739 | orchestrator | 2025-07-12 20:09:37.831750 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:09:37.831762 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:09:37.831774 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:09:37.831785 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:09:37.831796 | orchestrator | 2025-07-12 20:09:37.831807 | orchestrator | 2025-07-12 20:09:37.831818 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:09:37.831829 | orchestrator | Saturday 12 July 2025 20:09:20 +0000 (0:00:00.757) 0:00:01.919 ********* 2025-07-12 20:09:37.831840 | orchestrator | =============================================================================== 2025-07-12 20:09:37.831851 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.76s 2025-07-12 20:09:37.831862 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s 2025-07-12 
20:09:37.831874 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-07-12 20:09:37.831886 | orchestrator | 2025-07-12 20:09:37.831897 | orchestrator | 2025-07-12 20:09:37.831907 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:09:37.831918 | orchestrator | 2025-07-12 20:09:37.831930 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:09:37.831941 | orchestrator | Saturday 12 July 2025 20:06:55 +0000 (0:00:00.266) 0:00:00.266 ********* 2025-07-12 20:09:37.831952 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:09:37.831963 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:09:37.831976 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:09:37.831996 | orchestrator | 2025-07-12 20:09:37.832014 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:09:37.832059 | orchestrator | Saturday 12 July 2025 20:06:56 +0000 (0:00:00.295) 0:00:00.562 ********* 2025-07-12 20:09:37.832080 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-07-12 20:09:37.832097 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-07-12 20:09:37.832117 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-07-12 20:09:37.832136 | orchestrator | 2025-07-12 20:09:37.832154 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-07-12 20:09:37.832171 | orchestrator | 2025-07-12 20:09:37.832206 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 20:09:37.832228 | orchestrator | Saturday 12 July 2025 20:06:56 +0000 (0:00:00.434) 0:00:00.996 ********* 2025-07-12 20:09:37.832241 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 
20:09:37.832261 | orchestrator | 2025-07-12 20:09:37.832280 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-07-12 20:09:37.832300 | orchestrator | Saturday 12 July 2025 20:06:57 +0000 (0:00:00.525) 0:00:01.522 ********* 2025-07-12 20:09:37.832335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.832365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.832390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.832409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 
20:09:37.832485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832508 | orchestrator | 2025-07-12 20:09:37.832519 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-07-12 20:09:37.832530 | orchestrator | Saturday 12 July 2025 20:06:58 +0000 (0:00:01.729) 0:00:03.251 ********* 2025-07-12 20:09:37.832542 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-07-12 20:09:37.832553 | orchestrator | 2025-07-12 20:09:37.832564 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-07-12 20:09:37.832575 | orchestrator | Saturday 12 July 2025 20:06:59 +0000 (0:00:00.814) 
0:00:04.066 ********* 2025-07-12 20:09:37.832586 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:09:37.832603 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:09:37.832614 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:09:37.832625 | orchestrator | 2025-07-12 20:09:37.832636 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-07-12 20:09:37.832647 | orchestrator | Saturday 12 July 2025 20:07:00 +0000 (0:00:00.451) 0:00:04.517 ********* 2025-07-12 20:09:37.832658 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:09:37.832669 | orchestrator | 2025-07-12 20:09:37.832680 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 20:09:37.832691 | orchestrator | Saturday 12 July 2025 20:07:00 +0000 (0:00:00.701) 0:00:05.218 ********* 2025-07-12 20:09:37.832702 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:09:37.832713 | orchestrator | 2025-07-12 20:09:37.832730 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-07-12 20:09:37.832741 | orchestrator | Saturday 12 July 2025 20:07:01 +0000 (0:00:00.513) 0:00:05.732 ********* 2025-07-12 20:09:37.832759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.832772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.832785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.832804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.832895 | orchestrator | 2025-07-12 20:09:37.832907 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-07-12 20:09:37.832924 | orchestrator | Saturday 12 July 2025 20:07:04 +0000 (0:00:03.529) 0:00:09.261 ********* 2025-07-12 20:09:37.832936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:09:37.832956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.832972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:09:37.832984 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.832996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:09:37.833008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.833181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:09:37.833223 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.833252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:09:37.833265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.833284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:09:37.833296 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.833307 | orchestrator | 2025-07-12 
20:09:37.833318 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-07-12 20:09:37.833329 | orchestrator | Saturday 12 July 2025 20:07:05 +0000 (0:00:00.557) 0:00:09.819 ********* 2025-07-12 20:09:37.833341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:09:37.833363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.833375 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:09:37.833386 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.833405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:09:37.833420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.833431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:09:37.833448 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.833459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:09:37.833469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.833486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:09:37.833497 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.833507 | orchestrator | 2025-07-12 20:09:37.833517 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-07-12 20:09:37.833527 | orchestrator | Saturday 12 July 2025 20:07:06 +0000 (0:00:00.737) 0:00:10.556 ********* 2025-07-12 20:09:37.833542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.833553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.833962 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.833983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.833995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834120 | orchestrator | 2025-07-12 20:09:37.834130 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-07-12 20:09:37.834140 | orchestrator | Saturday 12 July 2025 20:07:09 +0000 (0:00:03.633) 0:00:14.190 ********* 2025-07-12 20:09:37.834160 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.834171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.834187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.834206 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.834222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.834233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.834244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834292 | orchestrator | 2025-07-12 20:09:37.834302 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-07-12 20:09:37.834312 | orchestrator | Saturday 12 July 2025 20:07:14 +0000 (0:00:05.118) 0:00:19.308 ********* 2025-07-12 20:09:37.834322 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:09:37.834332 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:09:37.834342 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:09:37.834351 | orchestrator | 2025-07-12 20:09:37.834361 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-07-12 20:09:37.834371 | orchestrator | Saturday 12 July 2025 20:07:16 +0000 (0:00:01.373) 0:00:20.681 ********* 2025-07-12 20:09:37.834381 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.834390 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.834400 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.834409 | orchestrator | 2025-07-12 20:09:37.834420 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-07-12 20:09:37.834429 | orchestrator | Saturday 12 July 2025 20:07:16 +0000 (0:00:00.613) 0:00:21.295 ********* 2025-07-12 20:09:37.834439 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.834449 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.834459 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.834468 | orchestrator | 2025-07-12 20:09:37.834478 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-07-12 20:09:37.834488 | orchestrator | Saturday 12 July 2025 20:07:17 +0000 (0:00:00.447) 0:00:21.743 ********* 
2025-07-12 20:09:37.834498 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.834508 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.834518 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.834527 | orchestrator | 2025-07-12 20:09:37.834537 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-07-12 20:09:37.834547 | orchestrator | Saturday 12 July 2025 20:07:17 +0000 (0:00:00.260) 0:00:22.003 ********* 2025-07-12 20:09:37.834566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.834578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.834626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.834638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.834661 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.834678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:09:37.834689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.834745 | orchestrator | 2025-07-12 20:09:37.834755 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 20:09:37.834765 | orchestrator | Saturday 12 July 2025 20:07:19 +0000 (0:00:02.294) 0:00:24.298 ********* 2025-07-12 20:09:37.834774 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.834784 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.834794 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.834804 | orchestrator | 2025-07-12 20:09:37.834828 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-07-12 20:09:37.834838 | orchestrator | Saturday 12 July 2025 20:07:20 +0000 (0:00:00.298) 0:00:24.597 ********* 2025-07-12 20:09:37.834848 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 20:09:37.834858 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 20:09:37.834868 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 20:09:37.834890 | orchestrator | 2025-07-12 20:09:37.834900 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-07-12 20:09:37.834909 | orchestrator | Saturday 12 July 2025 20:07:22 +0000 (0:00:01.904) 0:00:26.502 ********* 2025-07-12 20:09:37.834919 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:09:37.834929 | orchestrator | 2025-07-12 20:09:37.834938 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-07-12 20:09:37.834948 | orchestrator | Saturday 12 July 2025 20:07:23 +0000 (0:00:00.897) 0:00:27.399 ********* 2025-07-12 20:09:37.834958 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.834967 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.834977 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.834986 | orchestrator | 2025-07-12 20:09:37.834996 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-07-12 20:09:37.835006 | orchestrator | Saturday 12 July 2025 20:07:23 +0000 (0:00:00.507) 0:00:27.907 ********* 2025-07-12 20:09:37.835016 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 20:09:37.835046 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:09:37.835056 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 20:09:37.835066 | orchestrator | 2025-07-12 20:09:37.835076 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-07-12 20:09:37.835086 | orchestrator | Saturday 12 July 2025 20:07:24 +0000 (0:00:00.979) 0:00:28.887 ********* 2025-07-12 20:09:37.835101 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:09:37.835112 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:09:37.835122 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:09:37.835138 | orchestrator | 2025-07-12 20:09:37.835147 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-07-12 20:09:37.835157 | orchestrator | Saturday 12 July 2025 20:07:24 +0000 (0:00:00.315) 0:00:29.202 ********* 2025-07-12 20:09:37.835167 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-12 20:09:37.835177 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-12 20:09:37.835187 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-12 20:09:37.835196 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-12 20:09:37.835206 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-12 20:09:37.835216 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-12 20:09:37.835226 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-12 20:09:37.835236 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-12 
20:09:37.835245 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-12 20:09:37.835255 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-12 20:09:37.835265 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-12 20:09:37.835275 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-12 20:09:37.835284 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-12 20:09:37.835294 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-12 20:09:37.835308 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-12 20:09:37.835318 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 20:09:37.835327 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 20:09:37.835337 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 20:09:37.835347 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 20:09:37.835357 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 20:09:37.835367 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 20:09:37.835376 | orchestrator | 2025-07-12 20:09:37.835386 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-07-12 20:09:37.835395 | orchestrator | Saturday 12 July 2025 20:07:34 +0000 (0:00:09.183) 0:00:38.385 
********* 2025-07-12 20:09:37.835405 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 20:09:37.835414 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 20:09:37.835424 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 20:09:37.835433 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 20:09:37.835443 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 20:09:37.835453 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 20:09:37.835463 | orchestrator | 2025-07-12 20:09:37.835472 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-07-12 20:09:37.835487 | orchestrator | Saturday 12 July 2025 20:07:36 +0000 (0:00:02.517) 0:00:40.903 ********* 2025-07-12 20:09:37.835503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.835515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.835530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:09:37.835541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.835552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.835568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:09:37.835584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.835595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.835610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:09:37.835621 | orchestrator | 2025-07-12 20:09:37.835630 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 20:09:37.835641 | orchestrator | Saturday 12 July 2025 20:07:38 +0000 (0:00:02.369) 0:00:43.272 ********* 2025-07-12 20:09:37.835651 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.835661 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.835671 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.835681 | orchestrator | 2025-07-12 20:09:37.835691 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-07-12 20:09:37.835701 | orchestrator | Saturday 12 July 2025 20:07:39 +0000 (0:00:00.316) 0:00:43.588 ********* 2025-07-12 20:09:37.835711 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:09:37.835720 | orchestrator | 2025-07-12 20:09:37.835730 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-07-12 20:09:37.835740 | orchestrator | Saturday 12 July 2025 20:07:41 +0000 (0:00:02.229) 0:00:45.818 ********* 2025-07-12 20:09:37.835750 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:09:37.835765 | orchestrator | 2025-07-12 20:09:37.835775 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-07-12 20:09:37.835785 | orchestrator | Saturday 12 July 2025 20:07:44 +0000 (0:00:02.679) 0:00:48.498 ********* 2025-07-12 20:09:37.835795 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:09:37.835805 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:09:37.835815 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:09:37.835825 | orchestrator | 2025-07-12 20:09:37.835835 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-07-12 20:09:37.835845 | orchestrator | Saturday 12 July 2025 
20:07:45 +0000 (0:00:00.870) 0:00:49.369 ********* 2025-07-12 20:09:37.835855 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:09:37.835865 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:09:37.835874 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:09:37.835885 | orchestrator | 2025-07-12 20:09:37.835895 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-07-12 20:09:37.835905 | orchestrator | Saturday 12 July 2025 20:07:45 +0000 (0:00:00.345) 0:00:49.714 ********* 2025-07-12 20:09:37.835915 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.835925 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.835935 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.835945 | orchestrator | 2025-07-12 20:09:37.835955 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-07-12 20:09:37.835965 | orchestrator | Saturday 12 July 2025 20:07:45 +0000 (0:00:00.386) 0:00:50.101 ********* 2025-07-12 20:09:37.835975 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:09:37.835985 | orchestrator | 2025-07-12 20:09:37.835994 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-07-12 20:09:37.836004 | orchestrator | Saturday 12 July 2025 20:07:59 +0000 (0:00:14.184) 0:01:04.286 ********* 2025-07-12 20:09:37.836014 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:09:37.836024 | orchestrator | 2025-07-12 20:09:37.836060 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-12 20:09:37.836070 | orchestrator | Saturday 12 July 2025 20:08:10 +0000 (0:00:10.117) 0:01:14.403 ********* 2025-07-12 20:09:37.836080 | orchestrator | 2025-07-12 20:09:37.836090 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-12 20:09:37.836100 | orchestrator | Saturday 12 July 2025 20:08:10 +0000 
(0:00:00.259) 0:01:14.662 ********* 2025-07-12 20:09:37.836110 | orchestrator | 2025-07-12 20:09:37.836120 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-12 20:09:37.836129 | orchestrator | Saturday 12 July 2025 20:08:10 +0000 (0:00:00.069) 0:01:14.731 ********* 2025-07-12 20:09:37.836139 | orchestrator | 2025-07-12 20:09:37.836154 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-07-12 20:09:37.836165 | orchestrator | Saturday 12 July 2025 20:08:10 +0000 (0:00:00.066) 0:01:14.798 ********* 2025-07-12 20:09:37.836175 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:09:37.836184 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:09:37.836194 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:09:37.836204 | orchestrator | 2025-07-12 20:09:37.836215 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-07-12 20:09:37.836225 | orchestrator | Saturday 12 July 2025 20:08:30 +0000 (0:00:19.964) 0:01:34.763 ********* 2025-07-12 20:09:37.836235 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:09:37.836245 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:09:37.836254 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:09:37.836264 | orchestrator | 2025-07-12 20:09:37.836286 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-07-12 20:09:37.836297 | orchestrator | Saturday 12 July 2025 20:08:40 +0000 (0:00:10.298) 0:01:45.062 ********* 2025-07-12 20:09:37.836307 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:09:37.836317 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:09:37.836327 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:09:37.836337 | orchestrator | 2025-07-12 20:09:37.836346 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 
20:09:37.836363 | orchestrator | Saturday 12 July 2025 20:08:46 +0000 (0:00:06.105) 0:01:51.167 ********* 2025-07-12 20:09:37.836373 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:09:37.836383 | orchestrator | 2025-07-12 20:09:37.836393 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-07-12 20:09:37.836403 | orchestrator | Saturday 12 July 2025 20:08:47 +0000 (0:00:00.704) 0:01:51.872 ********* 2025-07-12 20:09:37.836413 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:09:37.836423 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:09:37.836433 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:09:37.836442 | orchestrator | 2025-07-12 20:09:37.836452 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-07-12 20:09:37.836462 | orchestrator | Saturday 12 July 2025 20:08:48 +0000 (0:00:00.764) 0:01:52.637 ********* 2025-07-12 20:09:37.836472 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:09:37.836482 | orchestrator | 2025-07-12 20:09:37.836492 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-07-12 20:09:37.836507 | orchestrator | Saturday 12 July 2025 20:08:50 +0000 (0:00:01.754) 0:01:54.391 ********* 2025-07-12 20:09:37.836517 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-07-12 20:09:37.836527 | orchestrator | 2025-07-12 20:09:37.836536 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-07-12 20:09:37.836546 | orchestrator | Saturday 12 July 2025 20:09:01 +0000 (0:00:11.013) 0:02:05.404 ********* 2025-07-12 20:09:37.836556 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-07-12 20:09:37.836565 | orchestrator | 2025-07-12 20:09:37.836575 | orchestrator | TASK [service-ks-register : keystone | Creating 
endpoints] ********************* 2025-07-12 20:09:37.836585 | orchestrator | Saturday 12 July 2025 20:09:23 +0000 (0:00:22.466) 0:02:27.871 ********* 2025-07-12 20:09:37.836594 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-07-12 20:09:37.836604 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-07-12 20:09:37.836614 | orchestrator | 2025-07-12 20:09:37.836623 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-07-12 20:09:37.836633 | orchestrator | Saturday 12 July 2025 20:09:31 +0000 (0:00:07.620) 0:02:35.492 ********* 2025-07-12 20:09:37.836642 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.836652 | orchestrator | 2025-07-12 20:09:37.836662 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-07-12 20:09:37.836671 | orchestrator | Saturday 12 July 2025 20:09:31 +0000 (0:00:00.271) 0:02:35.764 ********* 2025-07-12 20:09:37.836681 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.836691 | orchestrator | 2025-07-12 20:09:37.836700 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-07-12 20:09:37.836710 | orchestrator | Saturday 12 July 2025 20:09:31 +0000 (0:00:00.121) 0:02:35.885 ********* 2025-07-12 20:09:37.836719 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.836729 | orchestrator | 2025-07-12 20:09:37.836739 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-07-12 20:09:37.836748 | orchestrator | Saturday 12 July 2025 20:09:31 +0000 (0:00:00.145) 0:02:36.030 ********* 2025-07-12 20:09:37.836758 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.836767 | orchestrator | 2025-07-12 20:09:37.836777 | orchestrator | TASK [keystone : Creating default user role] 
*********************************** 2025-07-12 20:09:37.836786 | orchestrator | Saturday 12 July 2025 20:09:32 +0000 (0:00:00.335) 0:02:36.366 ********* 2025-07-12 20:09:37.836796 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:09:37.836806 | orchestrator | 2025-07-12 20:09:37.836815 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 20:09:37.836825 | orchestrator | Saturday 12 July 2025 20:09:35 +0000 (0:00:03.486) 0:02:39.852 ********* 2025-07-12 20:09:37.836840 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:09:37.836850 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:09:37.836859 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:09:37.836869 | orchestrator | 2025-07-12 20:09:37.836879 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:09:37.836889 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-07-12 20:09:37.836899 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-12 20:09:37.836914 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-12 20:09:37.836925 | orchestrator | 2025-07-12 20:09:37.836934 | orchestrator | 2025-07-12 20:09:37.836944 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:09:37.836954 | orchestrator | Saturday 12 July 2025 20:09:36 +0000 (0:00:00.775) 0:02:40.628 ********* 2025-07-12 20:09:37.836963 | orchestrator | =============================================================================== 2025-07-12 20:09:37.836973 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.47s 2025-07-12 20:09:37.836983 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.96s 
2025-07-12 20:09:37.836992 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.18s 2025-07-12 20:09:37.837002 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.01s 2025-07-12 20:09:37.837012 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.30s 2025-07-12 20:09:37.837021 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.12s 2025-07-12 20:09:37.837100 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.18s 2025-07-12 20:09:37.837111 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.62s 2025-07-12 20:09:37.837222 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.11s 2025-07-12 20:09:37.837233 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.12s 2025-07-12 20:09:37.837243 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.63s 2025-07-12 20:09:37.837252 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.53s 2025-07-12 20:09:37.837262 | orchestrator | keystone : Creating default user role ----------------------------------- 3.49s 2025-07-12 20:09:37.837271 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.68s 2025-07-12 20:09:37.837281 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.52s 2025-07-12 20:09:37.837291 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.37s 2025-07-12 20:09:37.837310 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.29s 2025-07-12 20:09:37.837320 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.23s 2025-07-12 
20:09:37.837330 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.90s 2025-07-12 20:09:37.837340 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.75s 2025-07-12 20:09:37.837349 | orchestrator | 2025-07-12 20:09:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:09:40.886897 | orchestrator | 2025-07-12 20:09:40 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:09:40.887829 | orchestrator | 2025-07-12 20:09:40 | INFO  | Task c3791263-be21-4d3c-b76b-50b7164ca7cf is in state STARTED 2025-07-12 20:09:40.889109 | orchestrator | 2025-07-12 20:09:40 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:09:40.890243 | orchestrator | 2025-07-12 20:09:40 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state STARTED 2025-07-12 20:09:40.891198 | orchestrator | 2025-07-12 20:09:40 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state STARTED 2025-07-12 20:09:40.891514 | orchestrator | 2025-07-12 20:09:40 | INFO  | Wait 1 second(s) until the next check [identical polling cycles from 20:09:43 through 20:09:59 elided] 2025-07-12 20:10:02.182520 | orchestrator | 2025-07-12 20:10:02 | INFO  | Task c3791263-be21-4d3c-b76b-50b7164ca7cf is in state SUCCESS 2025-07-12 20:10:05.214907 | orchestrator | 2025-07-12 20:10:05 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED [identical polling cycles from 20:10:05 through 20:10:56 elided] 2025-07-12 20:10:59.781244 | orchestrator | 2025-07-12 20:10:59 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:10:59.781447 | orchestrator | 2025-07-12 20:10:59 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:10:59.781965 | orchestrator | 2025-07-12 20:10:59 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:10:59.782366 | orchestrator | 2025-07-12 20:10:59 | INFO  | Task 5b817133-2d51-42f3-8487-85bc219e490d is in state SUCCESS 2025-07-12 20:10:59.782694 | orchestrator | 2025-07-12 20:10:59.782717 | orchestrator | 2025-07-12 20:10:59.782731 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:10:59.782745 | orchestrator | 2025-07-12 20:10:59.782759 | orchestrator | TASK [Group hosts based on Kolla action]
*************************************** 2025-07-12 20:10:59.782773 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:00.344) 0:00:00.344 ********* 2025-07-12 20:10:59.782787 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:10:59.782801 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:10:59.782815 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:10:59.782828 | orchestrator | ok: [testbed-manager] 2025-07-12 20:10:59.782842 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:10:59.782854 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:10:59.782867 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:10:59.782880 | orchestrator | 2025-07-12 20:10:59.782893 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:10:59.782907 | orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:00.864) 0:00:01.209 ********* 2025-07-12 20:10:59.782920 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-07-12 20:10:59.782935 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-07-12 20:10:59.782948 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-07-12 20:10:59.782962 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-07-12 20:10:59.782976 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-07-12 20:10:59.782990 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-07-12 20:10:59.783004 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-07-12 20:10:59.783018 | orchestrator | 2025-07-12 20:10:59.783032 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-07-12 20:10:59.783092 | orchestrator | 2025-07-12 20:10:59.783108 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-07-12 20:10:59.783121 | orchestrator | Saturday 12 July 2025 20:09:28 +0000 
(0:00:00.705) 0:00:01.914 ********* 2025-07-12 20:10:59.783136 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:10:59.783149 | orchestrator | 2025-07-12 20:10:59.783176 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-07-12 20:10:59.783191 | orchestrator | Saturday 12 July 2025 20:09:30 +0000 (0:00:01.884) 0:00:03.799 ********* 2025-07-12 20:10:59.783204 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-07-12 20:10:59.783217 | orchestrator | 2025-07-12 20:10:59.783334 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-07-12 20:10:59.783348 | orchestrator | Saturday 12 July 2025 20:09:34 +0000 (0:00:04.365) 0:00:08.165 ********* 2025-07-12 20:10:59.783357 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-07-12 20:10:59.783366 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-07-12 20:10:59.783374 | orchestrator | 2025-07-12 20:10:59.783437 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-07-12 20:10:59.783449 | orchestrator | Saturday 12 July 2025 20:09:41 +0000 (0:00:06.576) 0:00:14.741 ********* 2025-07-12 20:10:59.783457 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 20:10:59.783465 | orchestrator | 2025-07-12 20:10:59.783474 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-07-12 20:10:59.783482 | orchestrator | Saturday 12 July 2025 20:09:44 +0000 (0:00:03.574) 0:00:18.316 ********* 2025-07-12 20:10:59.783490 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2025-07-12 20:10:59.783498 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-07-12 20:10:59.783506 | orchestrator | 2025-07-12 20:10:59.783514 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-07-12 20:10:59.783522 | orchestrator | Saturday 12 July 2025 20:09:49 +0000 (0:00:04.654) 0:00:22.971 ********* 2025-07-12 20:10:59.783529 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 20:10:59.783537 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-07-12 20:10:59.783545 | orchestrator | 2025-07-12 20:10:59.783553 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-07-12 20:10:59.783561 | orchestrator | Saturday 12 July 2025 20:09:56 +0000 (0:00:06.778) 0:00:29.750 ********* 2025-07-12 20:10:59.783569 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-07-12 20:10:59.783577 | orchestrator | 2025-07-12 20:10:59.783585 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:10:59.783593 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:10:59.783601 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:10:59.783610 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:10:59.783618 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:10:59.783626 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:10:59.783644 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:10:59.783662 | orchestrator | testbed-node-5 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:10:59.783670 | orchestrator |
2025-07-12 20:10:59.783678 | orchestrator |
2025-07-12 20:10:59.783686 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:10:59.783694 | orchestrator | Saturday 12 July 2025 20:10:01 +0000 (0:00:04.938) 0:00:34.688 *********
2025-07-12 20:10:59.783701 | orchestrator | ===============================================================================
2025-07-12 20:10:59.783708 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.78s
2025-07-12 20:10:59.783714 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.58s
2025-07-12 20:10:59.783721 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.94s
2025-07-12 20:10:59.783728 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.65s
2025-07-12 20:10:59.783734 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.37s
2025-07-12 20:10:59.783741 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.57s
2025-07-12 20:10:59.783751 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.88s
2025-07-12 20:10:59.783761 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s
2025-07-12 20:10:59.783768 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2025-07-12 20:10:59.783775 | orchestrator |
2025-07-12 20:10:59.783781 | orchestrator |
2025-07-12 20:10:59.783788 | orchestrator | PLAY [Bootstrap ceph dashboard] ***********************************************
2025-07-12 20:10:59.783795 | orchestrator |
2025-07-12 20:10:59.783801 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-07-12 20:10:59.783808 | orchestrator | Saturday 12 July 2025 20:09:19 +0000 (0:00:00.262) 0:00:00.262 *********
2025-07-12 20:10:59.783814 | orchestrator | changed: [testbed-manager]
2025-07-12 20:10:59.783821 | orchestrator |
2025-07-12 20:10:59.783828 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-07-12 20:10:59.783839 | orchestrator | Saturday 12 July 2025 20:09:21 +0000 (0:00:02.159) 0:00:02.422 *********
2025-07-12 20:10:59.783846 | orchestrator | changed: [testbed-manager]
2025-07-12 20:10:59.783852 | orchestrator |
2025-07-12 20:10:59.783859 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-07-12 20:10:59.783866 | orchestrator | Saturday 12 July 2025 20:09:22 +0000 (0:00:01.026) 0:00:03.448 *********
2025-07-12 20:10:59.783872 | orchestrator | changed: [testbed-manager]
2025-07-12 20:10:59.783879 | orchestrator |
2025-07-12 20:10:59.783886 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-07-12 20:10:59.783892 | orchestrator | Saturday 12 July 2025 20:09:23 +0000 (0:00:01.048) 0:00:04.496 *********
2025-07-12 20:10:59.783899 | orchestrator | changed: [testbed-manager]
2025-07-12 20:10:59.783906 | orchestrator |
2025-07-12 20:10:59.783912 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-07-12 20:10:59.783919 | orchestrator | Saturday 12 July 2025 20:09:24 +0000 (0:00:01.292) 0:00:05.789 *********
2025-07-12 20:10:59.783927 | orchestrator | changed: [testbed-manager]
2025-07-12 20:10:59.783938 | orchestrator |
2025-07-12 20:10:59.783946 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-07-12 20:10:59.783957 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:01.197) 0:00:06.987 *********
2025-07-12 20:10:59.783964 | orchestrator | changed: [testbed-manager]
2025-07-12 20:10:59.783971 | orchestrator |
2025-07-12 20:10:59.783977 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-07-12 20:10:59.783984 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:00.963) 0:00:07.951 *********
2025-07-12 20:10:59.783991 | orchestrator | changed: [testbed-manager]
2025-07-12 20:10:59.783997 | orchestrator |
2025-07-12 20:10:59.784005 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-07-12 20:10:59.784021 | orchestrator | Saturday 12 July 2025 20:09:28 +0000 (0:00:01.154) 0:00:09.105 *********
2025-07-12 20:10:59.784028 | orchestrator | changed: [testbed-manager]
2025-07-12 20:10:59.784034 | orchestrator |
2025-07-12 20:10:59.784041 | orchestrator | TASK [Create admin user] *******************************************************
2025-07-12 20:10:59.784048 | orchestrator | Saturday 12 July 2025 20:09:29 +0000 (0:00:01.110) 0:00:10.216 *********
2025-07-12 20:10:59.784054 | orchestrator | changed: [testbed-manager]
2025-07-12 20:10:59.784153 | orchestrator |
2025-07-12 20:10:59.784176 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-07-12 20:10:59.784184 | orchestrator | Saturday 12 July 2025 20:10:32 +0000 (0:01:03.097) 0:01:13.314 *********
2025-07-12 20:10:59.784190 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:10:59.784197 | orchestrator |
2025-07-12 20:10:59.784204 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 20:10:59.784211 | orchestrator |
2025-07-12 20:10:59.784217 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 20:10:59.784224 | orchestrator | Saturday 12 July 2025 20:10:32 +0000 (0:00:00.147) 0:01:13.461 *********
2025-07-12 20:10:59.784231 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:10:59.784237 | orchestrator |
2025-07-12 20:10:59.784244 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 20:10:59.784250 | orchestrator |
2025-07-12 20:10:59.784268 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 20:10:59.784275 | orchestrator | Saturday 12 July 2025 20:10:44 +0000 (0:00:11.675) 0:01:25.137 *********
2025-07-12 20:10:59.784282 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:10:59.784295 | orchestrator |
2025-07-12 20:10:59.784302 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 20:10:59.784309 | orchestrator |
2025-07-12 20:10:59.784316 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 20:10:59.784322 | orchestrator | Saturday 12 July 2025 20:10:45 +0000 (0:00:01.359) 0:01:26.497 *********
2025-07-12 20:10:59.784329 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:10:59.784336 | orchestrator |
2025-07-12 20:10:59.784352 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:10:59.784359 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 20:10:59.784366 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:10:59.784379 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:10:59.784386 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:10:59.784393 | orchestrator |
2025-07-12 20:10:59.784400 | orchestrator |
2025-07-12 20:10:59.784406 | orchestrator |
2025-07-12 20:10:59.784420 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:10:59.784427 | orchestrator | Saturday 12 July 2025 20:10:56 +0000 (0:00:11.148) 0:01:37.645 *********
2025-07-12 20:10:59.784434 | orchestrator | ===============================================================================
2025-07-12 20:10:59.784440 | orchestrator | Create admin user ------------------------------------------------------ 63.10s
2025-07-12 20:10:59.784447 | orchestrator | Restart ceph manager service ------------------------------------------- 24.18s
2025-07-12 20:10:59.784454 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.16s
2025-07-12 20:10:59.784460 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s
2025-07-12 20:10:59.784467 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.20s
2025-07-12 20:10:59.784474 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.15s
2025-07-12 20:10:59.784488 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.11s
2025-07-12 20:10:59.784495 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.05s
2025-07-12 20:10:59.784506 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.03s
2025-07-12 20:10:59.784513 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.96s
2025-07-12 20:10:59.784520 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s
2025-07-12 20:10:59.784526 | orchestrator | 2025-07-12 20:10:59 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state STARTED
2025-07-12 20:10:59.784533 | orchestrator | 2025-07-12 20:10:59 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:11:02.809164 | orchestrator | 2025-07-12 20:11:02 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12
20:11:02.810233 | orchestrator | 2025-07-12 20:11:02 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED
2025-07-12 20:11:02.812145 | orchestrator | 2025-07-12 20:11:02 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED
2025-07-12 20:11:02.813958 | orchestrator | 2025-07-12 20:11:02 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state STARTED
2025-07-12 20:11:02.813985 | orchestrator | 2025-07-12 20:11:02 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:12:21.995202 | orchestrator | 2025-07-12 20:12:21 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED
2025-07-12 20:12:21.996233 | orchestrator | 2025-07-12 20:12:21 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED
2025-07-12 20:12:21.997384 | orchestrator | 2025-07-12 20:12:21 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED
2025-07-12 20:12:21.998601 | orchestrator | 2025-07-12 20:12:21 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED
2025-07-12 20:12:22.001407 | orchestrator | 2025-07-12 20:12:21 | INFO  | Task 1cdf5dc0-c629-443c-8e1f-ef1006d13ca8 is in state SUCCESS
2025-07-12 20:12:22.005200 | orchestrator |
2025-07-12 20:12:22.005310 | orchestrator |
2025-07-12 20:12:22.005339 | orchestrator | PLAY [Group hosts based on configuration] **************************************
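The status checker above re-reads every task's state on a fixed interval and logs "Wait 1 second(s) until the next check" until all tasks leave STARTED. A minimal sketch of that wait loop, assuming a hypothetical `get_state` callable in place of the real OSISM task API:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll tasks until none is left in state STARTED.

    get_state(task_id) -> str is a stand-in for the real task API
    (hypothetical; the actual client is not shown in this log).
    Returns a dict mapping each task id to its final state.
    """
    deadline = time.monotonic() + timeout
    while True:
        # One polling cycle: read and report the state of every task.
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        # Done once no task remains in STARTED (e.g. SUCCESS or FAILURE).
        if all(s != "STARTED" for s in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {states}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```

The cadence matches the log: one status line per task per cycle, a wait message, then the next cycle roughly every three seconds of wall time.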
2025-07-12 20:12:22.005423 | orchestrator |
2025-07-12 20:12:22.005474 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:12:22.005493 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:00.253) 0:00:00.253 *********
2025-07-12 20:12:22.005512 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:12:22.005532 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:12:22.005550 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:12:22.005568 | orchestrator |
2025-07-12 20:12:22.005605 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:12:22.005625 | orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:00.322) 0:00:00.575 *********
2025-07-12 20:12:22.005645 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-07-12 20:12:22.005665 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-07-12 20:12:22.005685 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-07-12 20:12:22.005705 | orchestrator |
2025-07-12 20:12:22.005726 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-07-12 20:12:22.005747 | orchestrator |
2025-07-12 20:12:22.005764 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-12 20:12:22.005777 | orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:00.418) 0:00:00.993 *********
2025-07-12 20:12:22.005790 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:12:22.005804 | orchestrator |
2025-07-12 20:12:22.005816 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-07-12 20:12:22.005829 | orchestrator | Saturday 12 July 2025 20:09:28 +0000 (0:00:00.504) 0:00:01.497 *********
2025-07-12 20:12:22.005842 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-07-12 20:12:22.005855 | orchestrator |
2025-07-12 20:12:22.005868 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-07-12 20:12:22.005879 | orchestrator | Saturday 12 July 2025 20:09:32 +0000 (0:00:04.174) 0:00:05.672 *********
2025-07-12 20:12:22.005890 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-07-12 20:12:22.005902 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-07-12 20:12:22.005913 | orchestrator |
2025-07-12 20:12:22.005924 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-07-12 20:12:22.005935 | orchestrator | Saturday 12 July 2025 20:09:39 +0000 (0:00:06.949) 0:00:12.621 *********
2025-07-12 20:12:22.005946 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-07-12 20:12:22.005957 | orchestrator |
2025-07-12 20:12:22.005968 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-07-12 20:12:22.005979 | orchestrator | Saturday 12 July 2025 20:09:43 +0000 (0:00:03.697) 0:00:16.319 *********
2025-07-12 20:12:22.005991 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:12:22.006002 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-07-12 20:12:22.006013 | orchestrator |
2025-07-12 20:12:22.006232 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-07-12 20:12:22.006255 | orchestrator | Saturday 12 July 2025 20:09:48 +0000 (0:00:04.968) 0:00:21.287 *********
2025-07-12 20:12:22.006273 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:12:22.006292 | orchestrator |
2025-07-12 20:12:22.006310 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
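The service-ks-register tasks above register glance in keystone: a service, one internal and one public endpoint, the service project, the glance user, and the admin role grant. A sketch of how the endpoint items printed in the log could be composed; the FQDNs and port are taken from the log itself, while the helper function is hypothetical (not the kolla-ansible implementation):

```python
def endpoint_items(service, internal_fqdn, external_fqdn, port):
    """Build '<service> -> <url> -> <interface>' items in the shape
    printed by the 'Creating endpoints' task above (hypothetical helper)."""
    urls = {
        "internal": f"https://{internal_fqdn}:{port}",
        "public": f"https://{external_fqdn}:{port}",
    }
    return [f"{service} -> {url} -> {iface}" for iface, url in urls.items()]
```

With the values from this deployment, `endpoint_items("glance", "api-int.testbed.osism.xyz", "api.testbed.osism.xyz", 9292)` yields the two items shown for the internal and public interfaces.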
2025-07-12 20:12:22.006329 | orchestrator | Saturday 12 July 2025 20:09:51 +0000 (0:00:03.724) 0:00:25.012 ********* 2025-07-12 20:12:22.006348 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-07-12 20:12:22.006367 | orchestrator | 2025-07-12 20:12:22.006385 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-07-12 20:12:22.006403 | orchestrator | Saturday 12 July 2025 20:09:56 +0000 (0:00:04.834) 0:00:29.846 ********* 2025-07-12 20:12:22.006496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.006518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 
2025-07-12 20:12:22.006532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.006554 | orchestrator | 2025-07-12 20:12:22.006566 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 20:12:22.006577 | orchestrator | Saturday 12 July 2025 20:10:00 +0000 
(0:00:04.292) 0:00:34.138 ********* 2025-07-12 20:12:22.006589 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:12:22.006600 | orchestrator | 2025-07-12 20:12:22.006619 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-07-12 20:12:22.006630 | orchestrator | Saturday 12 July 2025 20:10:01 +0000 (0:00:00.582) 0:00:34.721 ********* 2025-07-12 20:12:22.006642 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:22.006654 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:12:22.006665 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:12:22.006676 | orchestrator | 2025-07-12 20:12:22.006687 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-07-12 20:12:22.006704 | orchestrator | Saturday 12 July 2025 20:10:05 +0000 (0:00:03.583) 0:00:38.304 ********* 2025-07-12 20:12:22.006716 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:12:22.006728 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:12:22.006739 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:12:22.006750 | orchestrator | 2025-07-12 20:12:22.006761 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-07-12 20:12:22.006772 | orchestrator | Saturday 12 July 2025 20:10:06 +0000 (0:00:01.565) 0:00:39.869 ********* 2025-07-12 20:12:22.006784 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:12:22.006820 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 
2025-07-12 20:12:22.006841 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:12:22.006906 | orchestrator | 2025-07-12 20:12:22.006926 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-07-12 20:12:22.006945 | orchestrator | Saturday 12 July 2025 20:10:07 +0000 (0:00:01.197) 0:00:41.067 ********* 2025-07-12 20:12:22.006963 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:12:22.006982 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:12:22.007000 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:12:22.007019 | orchestrator | 2025-07-12 20:12:22.007038 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-07-12 20:12:22.007055 | orchestrator | Saturday 12 July 2025 20:10:08 +0000 (0:00:00.798) 0:00:41.866 ********* 2025-07-12 20:12:22.007095 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.007116 | orchestrator | 2025-07-12 20:12:22.007149 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-07-12 20:12:22.007169 | orchestrator | Saturday 12 July 2025 20:10:08 +0000 (0:00:00.123) 0:00:41.989 ********* 2025-07-12 20:12:22.007248 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.007272 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.007291 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.007310 | orchestrator | 2025-07-12 20:12:22.007330 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 20:12:22.007349 | orchestrator | Saturday 12 July 2025 20:10:08 +0000 (0:00:00.257) 0:00:42.246 ********* 2025-07-12 20:12:22.007369 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:12:22.007388 | orchestrator | 2025-07-12 20:12:22.007406 | 
orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-07-12 20:12:22.007424 | orchestrator | Saturday 12 July 2025 20:10:09 +0000 (0:00:00.507) 0:00:42.754 ********* 2025-07-12 20:12:22.007462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.007497 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.007548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.007562 | orchestrator | 2025-07-12 20:12:22.007573 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-07-12 20:12:22.007584 | orchestrator | Saturday 12 July 2025 20:10:15 +0000 (0:00:05.858) 0:00:48.612 ********* 2025-07-12 20:12:22.007613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:12:22.007635 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.007648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:12:22.007660 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.007687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:12:22.007701 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.007712 | orchestrator | 2025-07-12 20:12:22.007723 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-07-12 20:12:22.007734 | orchestrator | Saturday 12 July 2025 20:10:18 +0000 (0:00:02.652) 0:00:51.265 ********* 2025-07-12 20:12:22.007747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:12:22.007765 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.007785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:12:22.007798 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.007815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:12:22.007833 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.007845 | orchestrator | 2025-07-12 20:12:22.007856 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-07-12 20:12:22.007867 | orchestrator | Saturday 12 July 2025 20:10:21 +0000 (0:00:03.952) 0:00:55.217 ********* 2025-07-12 20:12:22.007878 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.007889 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.007900 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.007911 | orchestrator | 2025-07-12 20:12:22.007922 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-07-12 20:12:22.007933 | orchestrator | Saturday 12 July 2025 20:10:25 +0000 (0:00:03.591) 0:00:58.809 ********* 2025-07-12 20:12:22.007951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.007970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.007992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.008014 | orchestrator | 2025-07-12 20:12:22.008032 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-07-12 20:12:22.008049 | orchestrator | Saturday 12 July 2025 20:10:29 +0000 (0:00:04.249) 0:01:03.058 ********* 2025-07-12 20:12:22.008068 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:22.008205 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:12:22.008227 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:12:22.008244 | orchestrator | 2025-07-12 20:12:22.008262 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-07-12 20:12:22.008280 | orchestrator | Saturday 12 July 2025 20:10:37 +0000 (0:00:07.914) 0:01:10.973 ********* 2025-07-12 20:12:22.008297 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.008314 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.008333 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 20:12:22.008350 | orchestrator | 2025-07-12 20:12:22.008370 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-07-12 20:12:22.008401 | orchestrator | Saturday 12 July 2025 20:10:43 +0000 (0:00:06.058) 0:01:17.036 ********* 2025-07-12 20:12:22.008437 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.008455 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.008475 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.008494 | orchestrator | 2025-07-12 20:12:22.008512 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-07-12 20:12:22.008531 | orchestrator | Saturday 12 July 2025 20:10:47 +0000 (0:00:03.753) 0:01:20.790 ********* 2025-07-12 20:12:22.008552 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.008563 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.008574 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.008585 | orchestrator | 2025-07-12 20:12:22.008596 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-07-12 20:12:22.008607 | orchestrator | Saturday 12 July 2025 20:10:53 +0000 (0:00:06.406) 0:01:27.196 ********* 2025-07-12 20:12:22.008618 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.008630 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.008641 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.008652 | orchestrator | 2025-07-12 20:12:22.008663 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-07-12 20:12:22.008674 | orchestrator | Saturday 12 July 2025 20:10:58 +0000 (0:00:04.995) 0:01:32.192 ********* 2025-07-12 20:12:22.008685 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.008696 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.008706 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 20:12:22.008718 | orchestrator | 2025-07-12 20:12:22.008729 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-07-12 20:12:22.008740 | orchestrator | Saturday 12 July 2025 20:10:59 +0000 (0:00:00.257) 0:01:32.450 ********* 2025-07-12 20:12:22.008751 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-12 20:12:22.008763 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.008773 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-12 20:12:22.008783 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.008793 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-07-12 20:12:22.008803 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.008812 | orchestrator | 2025-07-12 20:12:22.008822 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-07-12 20:12:22.008832 | orchestrator | Saturday 12 July 2025 20:11:02 +0000 (0:00:03.576) 0:01:36.026 ********* 2025-07-12 20:12:22.008843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.008878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.008891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:12:22.008902 | orchestrator | 2025-07-12 20:12:22.008912 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 20:12:22.008922 | orchestrator | Saturday 12 July 2025 20:11:07 +0000 (0:00:04.247) 0:01:40.274 ********* 2025-07-12 20:12:22.008939 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:22.008949 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:22.008959 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:22.008968 | orchestrator | 2025-07-12 20:12:22.008978 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-07-12 20:12:22.008988 | orchestrator | Saturday 12 July 2025 20:11:07 +0000 (0:00:00.205) 0:01:40.479 ********* 2025-07-12 20:12:22.008998 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:22.009008 | orchestrator | 2025-07-12 20:12:22.009018 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-07-12 20:12:22.009028 | orchestrator | Saturday 12 July 2025 20:11:09 +0000 (0:00:02.062) 0:01:42.541 ********* 2025-07-12 20:12:22.009038 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:22.009048 | orchestrator | 2025-07-12 20:12:22.009058 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-07-12 20:12:22.009069 | orchestrator | Saturday 12 July 2025 20:11:11 +0000 (0:00:02.263) 0:01:44.805 ********* 2025-07-12 20:12:22.009117 | orchestrator | 
changed: [testbed-node-0] 2025-07-12 20:12:22.009133 | orchestrator | 2025-07-12 20:12:22.009148 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-07-12 20:12:22.009164 | orchestrator | Saturday 12 July 2025 20:11:13 +0000 (0:00:02.092) 0:01:46.898 ********* 2025-07-12 20:12:22.009180 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:22.009196 | orchestrator | 2025-07-12 20:12:22.009212 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-07-12 20:12:22.009227 | orchestrator | Saturday 12 July 2025 20:11:42 +0000 (0:00:28.743) 0:02:15.641 ********* 2025-07-12 20:12:22.009243 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:22.009259 | orchestrator | 2025-07-12 20:12:22.009284 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-12 20:12:22.009301 | orchestrator | Saturday 12 July 2025 20:11:44 +0000 (0:00:02.579) 0:02:18.221 ********* 2025-07-12 20:12:22.009318 | orchestrator | 2025-07-12 20:12:22.009335 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-12 20:12:22.009351 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:00.090) 0:02:18.311 ********* 2025-07-12 20:12:22.009368 | orchestrator | 2025-07-12 20:12:22.009393 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-07-12 20:12:22.009411 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:00.097) 0:02:18.409 ********* 2025-07-12 20:12:22.009427 | orchestrator | 2025-07-12 20:12:22.009444 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-07-12 20:12:22.009455 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:00.070) 0:02:18.479 ********* 2025-07-12 20:12:22.009482 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:22.009492 | orchestrator | 
changed: [testbed-node-2] 2025-07-12 20:12:22.009502 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:12:22.009511 | orchestrator | 2025-07-12 20:12:22.009521 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:12:22.009532 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 20:12:22.009543 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:12:22.009553 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:12:22.009563 | orchestrator | 2025-07-12 20:12:22.009573 | orchestrator | 2025-07-12 20:12:22.009583 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:12:22.009593 | orchestrator | Saturday 12 July 2025 20:12:18 +0000 (0:00:33.535) 0:02:52.015 ********* 2025-07-12 20:12:22.009602 | orchestrator | =============================================================================== 2025-07-12 20:12:22.009627 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.54s 2025-07-12 20:12:22.009637 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.74s 2025-07-12 20:12:22.009647 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 7.91s 2025-07-12 20:12:22.009657 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.95s 2025-07-12 20:12:22.009667 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.41s 2025-07-12 20:12:22.009676 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 6.06s 2025-07-12 20:12:22.009686 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 5.86s 
2025-07-12 20:12:22.009696 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.00s 2025-07-12 20:12:22.009706 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.97s 2025-07-12 20:12:22.009716 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.83s 2025-07-12 20:12:22.009726 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.29s 2025-07-12 20:12:22.009736 | orchestrator | glance : Copying over config.json files for services -------------------- 4.25s 2025-07-12 20:12:22.009746 | orchestrator | glance : Check glance containers ---------------------------------------- 4.25s 2025-07-12 20:12:22.009756 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.17s 2025-07-12 20:12:22.009766 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.95s 2025-07-12 20:12:22.009775 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.75s 2025-07-12 20:12:22.009792 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.72s 2025-07-12 20:12:22.009809 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.70s 2025-07-12 20:12:22.009825 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.59s 2025-07-12 20:12:22.009841 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.58s 2025-07-12 20:12:22.009860 | orchestrator | 2025-07-12 20:12:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:25.057288 | orchestrator | 2025-07-12 20:12:25 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:25.062691 | orchestrator | 2025-07-12 20:12:25 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 
2025-07-12 20:12:25.063993 | orchestrator | 2025-07-12 20:12:25 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:12:25.065386 | orchestrator | 2025-07-12 20:12:25 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:25.065415 | orchestrator | 2025-07-12 20:12:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:28.115712 | orchestrator | 2025-07-12 20:12:28 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:28.116914 | orchestrator | 2025-07-12 20:12:28 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:12:28.118299 | orchestrator | 2025-07-12 20:12:28 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:12:28.122814 | orchestrator | 2025-07-12 20:12:28 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:28.122878 | orchestrator | 2025-07-12 20:12:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:31.173507 | orchestrator | 2025-07-12 20:12:31 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:31.174647 | orchestrator | 2025-07-12 20:12:31 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:12:31.175818 | orchestrator | 2025-07-12 20:12:31 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:12:31.177233 | orchestrator | 2025-07-12 20:12:31 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:31.177472 | orchestrator | 2025-07-12 20:12:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:34.219953 | orchestrator | 2025-07-12 20:12:34 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:34.220772 | orchestrator | 2025-07-12 20:12:34 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:12:34.222689 | 
orchestrator | 2025-07-12 20:12:34 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:12:34.224301 | orchestrator | 2025-07-12 20:12:34 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:34.224331 | orchestrator | 2025-07-12 20:12:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:37.266174 | orchestrator | 2025-07-12 20:12:37 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:37.268952 | orchestrator | 2025-07-12 20:12:37 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:12:37.271129 | orchestrator | 2025-07-12 20:12:37 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:12:37.273222 | orchestrator | 2025-07-12 20:12:37 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:37.273262 | orchestrator | 2025-07-12 20:12:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:40.309821 | orchestrator | 2025-07-12 20:12:40 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:40.311249 | orchestrator | 2025-07-12 20:12:40 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:12:40.311686 | orchestrator | 2025-07-12 20:12:40 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:12:40.312599 | orchestrator | 2025-07-12 20:12:40 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:40.312636 | orchestrator | 2025-07-12 20:12:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:43.361703 | orchestrator | 2025-07-12 20:12:43 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:43.365680 | orchestrator | 2025-07-12 20:12:43 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:12:43.367870 | orchestrator | 2025-07-12 
20:12:43 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state STARTED 2025-07-12 20:12:43.370142 | orchestrator | 2025-07-12 20:12:43 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:43.370182 | orchestrator | 2025-07-12 20:12:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:46.407338 | orchestrator | 2025-07-12 20:12:46 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:46.409855 | orchestrator | 2025-07-12 20:12:46 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:12:46.415102 | orchestrator | 2025-07-12 20:12:46 | INFO  | Task bc2f9004-2120-466b-b513-3d92429bf2e4 is in state SUCCESS 2025-07-12 20:12:46.417221 | orchestrator | 2025-07-12 20:12:46.417251 | orchestrator | 2025-07-12 20:12:46.417257 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:12:46.417263 | orchestrator | 2025-07-12 20:12:46.417269 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:12:46.417274 | orchestrator | Saturday 12 July 2025 20:09:19 +0000 (0:00:00.277) 0:00:00.277 ********* 2025-07-12 20:12:46.417294 | orchestrator | ok: [testbed-manager] 2025-07-12 20:12:46.417301 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:12:46.417311 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:12:46.417316 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:12:46.417321 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:12:46.417330 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:12:46.417337 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:12:46.417342 | orchestrator | 2025-07-12 20:12:46.417347 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:12:46.417353 | orchestrator | Saturday 12 July 2025 20:09:20 +0000 (0:00:00.870) 0:00:01.148 ********* 2025-07-12 
20:12:46.417358 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-07-12 20:12:46.417363 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-07-12 20:12:46.417368 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-07-12 20:12:46.417373 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-07-12 20:12:46.417386 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-07-12 20:12:46.417392 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-07-12 20:12:46.417397 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-07-12 20:12:46.417402 | orchestrator | 2025-07-12 20:12:46.417407 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-07-12 20:12:46.417412 | orchestrator | 2025-07-12 20:12:46.417417 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-12 20:12:46.417422 | orchestrator | Saturday 12 July 2025 20:09:20 +0000 (0:00:00.847) 0:00:01.995 ********* 2025-07-12 20:12:46.417428 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:12:46.417434 | orchestrator | 2025-07-12 20:12:46.417440 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-07-12 20:12:46.417445 | orchestrator | Saturday 12 July 2025 20:09:22 +0000 (0:00:01.634) 0:00:03.630 ********* 2025-07-12 20:12:46.417452 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:12:46.417460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417466 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417475 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417510 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417517 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417528 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417537 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417551 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417566 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:12:46.417573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417587 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417610 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417661 | orchestrator | 2025-07-12 20:12:46.417667 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-12 20:12:46.417672 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:03.956) 0:00:07.587 ********* 2025-07-12 20:12:46.417678 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:12:46.417683 | orchestrator | 2025-07-12 20:12:46.417688 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-07-12 20:12:46.417694 | orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:01.462) 0:00:09.049 ********* 2025-07-12 20:12:46.417699 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 
2025-07-12 20:12:46.417705 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417734 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417748 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.417753 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417781 | 
orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417786 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417805 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:12:46.417817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417846 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417856 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.417884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.417905 | orchestrator | 
2025-07-12 20:12:46.417911 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-07-12 20:12:46.417917 | orchestrator | Saturday 12 July 2025 20:09:34 +0000 (0:00:06.189) 0:00:15.239 ********* 2025-07-12 20:12:46.417926 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 20:12:46.417935 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:12:46.417941 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:12:46.417950 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 20:12:46.417962 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:12:46.417968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:12:46.417976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:12:46.417983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:12:46.417994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:12:46.418001 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:12:46.418007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:12:46.418048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:12:46.418060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:12:46.418067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:12:46.418094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:12:46.418105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:12:46.418111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:12:46.418117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:12:46.418123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:12:46.418129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-07-12 20:12:46.418139 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:12:46.418147 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:12:46.418153 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:12:46.418159 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:12:46.418169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418187 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418194 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:12:46.418200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418218 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:12:46.418224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418245 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:12:46.418250 | orchestrator |
2025-07-12 20:12:46.418256 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-07-12 20:12:46.418265 | orchestrator | Saturday 12 July 2025 20:09:35 +0000 (0:00:01.594) 0:00:16.834 *********
2025-07-12 20:12:46.418272 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 20:12:46.418278 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418283 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418289 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 20:12:46.418295 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418336 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:12:46.418342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418353 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:12:46.418358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418412 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:12:46.418417 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:12:46.418425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418454 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:12:46.418459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418464 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418475 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:12:46.418480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418497 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418502 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:12:46.418508 | orchestrator |
2025-07-12 20:12:46.418513 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-07-12 20:12:46.418518 | orchestrator | Saturday 12 July 2025 20:09:37 +0000 (0:00:02.184) 0:00:19.019 *********
2025-07-12 20:12:46.418529 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 20:12:46.418538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418543 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418559 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418583 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:12:46.418589 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418636 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 20:12:46.418642 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418675 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418683 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:12:46.418699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:12:46.418718 | orchestrator |
2025-07-12 20:12:46.418723 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-07-12 20:12:46.418728 | orchestrator | Saturday 12 July 2025 20:09:44 +0000 (0:00:07.075) 0:00:26.094 *********
2025-07-12 20:12:46.418733 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 20:12:46.418739 | orchestrator |
2025-07-12 20:12:46.418744 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-07-12 20:12:46.418751 | orchestrator | Saturday 12 July 2025 20:09:45 +0000 (0:00:00.882) 0:00:26.976 *********
2025-07-12 20:12:46.418757 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1056022, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.010993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 20:12:46.418765 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1056022, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.010993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12
20:12:46.418771 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1056022, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.010993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418776 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1056350, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1519957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418785 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1056022, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.010993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.418790 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1056022, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.010993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418801 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1056350, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1519957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418809 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1056022, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.010993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418818 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
996, 'inode': 1056022, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.010993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418824 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1056350, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1519957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418829 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1056350, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1519957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418838 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1056016, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0089931, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418844 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1056016, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0089931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418852 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1056035, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418857 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1056350, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1519957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False})  2025-07-12 20:12:46.418865 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1056016, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0089931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418871 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1056350, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1519957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418876 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1056035, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418884 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1056016, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0089931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418890 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1056010, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418898 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1056350, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1519957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.418904 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1056035, 
'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418912 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1056023, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0119932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418917 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1056016, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0089931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418926 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1056010, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418931 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1056016, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0089931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418937 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1056035, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418942 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1056033, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418951 | 
orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1056010, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418959 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1056023, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0119932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418964 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1056035, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418973 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1056025, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0119932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418978 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1056010, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418984 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1056035, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.418989 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1056016, 'dev': 86, 'nlink': 1, 'atime': 
1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0089931, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419003 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1056018, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.009993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419011 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1056023, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0119932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419017 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1056033, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419027 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1056010, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419032 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1056023, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0119932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419038 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1056010, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419043 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1056033, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419051 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1056025, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0119932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419059 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1056346, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1509955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419065 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1056023, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0119932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419086 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1056033, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0149932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419092 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1056025, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0119932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419097 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1056023, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 
'ctime': 1752348510.0119932, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 20:12:46.419103 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1056004, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.004993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 20:12:46.419111 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2025-07-12 20:12:46.419119 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:12:46.419128 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2025-07-12 20:12:46.419133 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2025-07-12 20:12:46.419139 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:12:46.419144 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2025-07-12 20:12:46.419149 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2025-07-12 20:12:46.419157 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:12:46.419166 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2025-07-12 20:12:46.419176 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2025-07-12 20:12:46.419181 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-12 20:12:46.419187 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2025-07-12 20:12:46.419192 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2025-07-12 20:12:46.419197 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2025-07-12 20:12:46.419207 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:12:46.419212 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 20:12:46.419223 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-07-12 20:12:46.419229 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:12:46.419234 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:12:46.419239 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2025-07-12 20:12:46.419245 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:12:46.419253 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:12:46.419258 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:12:46.419270 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-07-12 20:12:46.419275 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-12 20:12:46.419281 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2025-07-12 20:12:46.419286 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2025-07-12 20:12:46.419291 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2025-07-12 20:12:46.419300 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-12 20:12:46.419308 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:12:46.419316 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 20:12:46.419321 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2025-07-12 20:12:46.419327 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:12:46.419332 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 20:12:46.419337 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-12 20:12:46.419345 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:12:46.419354 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:12:46.419366 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:12:46.419376 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:12:46.419385 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 20:12:46.419392 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:12:46.419399 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:12:46.419411 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-07-12 20:12:46.419428 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-07-12 20:12:46.419443 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:12:46.419456 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:12:46.419465 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-12 20:12:46.419475 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2025-07-12 20:12:46.419484 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:12:46.419492 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:12:46.419512 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-12 20:12:46.419521 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:12:46.419533 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 20:12:46.419543 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:12:46.419553 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:12:46.419564 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:12:46.419573 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-07-12 20:12:46.419582 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:12:46.419601 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 20:12:46.419611 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:12:46.419624 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:12:46.419633 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:12:46.419639 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:12:46.419644 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:12:46.419650 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:12:46.419655 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:12:46.419666 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:12:46.419674 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644',
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1056030, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.012993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419680 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1056018, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.009993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419688 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1056030, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.012993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419693 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1056359, 
'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1549957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419699 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.419704 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1056359, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1549957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:12:46.419709 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.419714 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1056346, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1509955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419723 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1056004, 'dev': 86, 
'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.004993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419731 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1056360, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1549957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419737 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1056345, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1499956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419744 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1056012, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.007993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419750 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1056007, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.006993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419755 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1056032, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0139933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419761 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1056030, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.012993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419769 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1056359, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.1549957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:12:46.419775 | orchestrator | 2025-07-12 20:12:46.419782 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-07-12 20:12:46.419791 | orchestrator | Saturday 12 July 2025 20:10:08 +0000 (0:00:22.897) 0:00:49.874 ********* 2025-07-12 20:12:46.419796 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:12:46.419801 | orchestrator | 2025-07-12 20:12:46.419807 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-07-12 20:12:46.419812 | orchestrator | Saturday 12 July 2025 20:10:09 +0000 (0:00:00.646) 0:00:50.521 ********* 2025-07-12 20:12:46.419820 | orchestrator | [WARNING]: Skipped 2025-07-12 20:12:46.419826 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419831 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-07-12 20:12:46.419836 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419842 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-07-12 20:12:46.419847 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:12:46.419852 | orchestrator | [WARNING]: Skipped 2025-07-12 20:12:46.419857 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 
20:12:46.419862 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-07-12 20:12:46.419868 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419873 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-07-12 20:12:46.419878 | orchestrator | [WARNING]: Skipped 2025-07-12 20:12:46.419883 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419888 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-07-12 20:12:46.419896 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419901 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-07-12 20:12:46.419906 | orchestrator | [WARNING]: Skipped 2025-07-12 20:12:46.419911 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419917 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-07-12 20:12:46.419922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419927 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-07-12 20:12:46.419932 | orchestrator | [WARNING]: Skipped 2025-07-12 20:12:46.419937 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419942 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-07-12 20:12:46.419947 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419953 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-07-12 20:12:46.419958 | orchestrator | [WARNING]: Skipped 2025-07-12 20:12:46.419966 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419971 | orchestrator | 
node-5/prometheus.yml.d' path due to this access issue: 2025-07-12 20:12:46.419976 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.419982 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-07-12 20:12:46.419991 | orchestrator | [WARNING]: Skipped 2025-07-12 20:12:46.420000 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.420008 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-07-12 20:12:46.420017 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-12 20:12:46.420026 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-07-12 20:12:46.420034 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:12:46.420044 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 20:12:46.420055 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 20:12:46.420064 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 20:12:46.420113 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 20:12:46.420123 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 20:12:46.420132 | orchestrator | 2025-07-12 20:12:46.420141 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-07-12 20:12:46.420150 | orchestrator | Saturday 12 July 2025 20:10:12 +0000 (0:00:02.936) 0:00:53.457 ********* 2025-07-12 20:12:46.420159 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 20:12:46.420168 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 20:12:46.420177 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:46.420182 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:46.420187 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 20:12:46.420192 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:46.420197 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 20:12:46.420206 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:12:46.420215 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 20:12:46.420223 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.420232 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-07-12 20:12:46.420240 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.420248 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-07-12 20:12:46.420257 | orchestrator | 2025-07-12 20:12:46.420266 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-07-12 20:12:46.420275 | orchestrator | Saturday 12 July 2025 20:10:29 +0000 (0:00:17.513) 0:01:10.971 ********* 2025-07-12 20:12:46.420284 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 20:12:46.420292 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:46.420301 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 20:12:46.420316 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:46.420324 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 20:12:46.420333 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 20:12:46.420341 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:12:46.420349 | orchestrator | 
skipping: [testbed-node-2] 2025-07-12 20:12:46.420358 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 20:12:46.420373 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.420382 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-07-12 20:12:46.420391 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.420400 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-07-12 20:12:46.420408 | orchestrator | 2025-07-12 20:12:46.420417 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-07-12 20:12:46.420426 | orchestrator | Saturday 12 July 2025 20:10:33 +0000 (0:00:04.056) 0:01:15.027 ********* 2025-07-12 20:12:46.420440 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 20:12:46.420449 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-07-12 20:12:46.420457 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:46.420467 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 20:12:46.420475 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:46.420484 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 20:12:46.420492 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:46.420501 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 20:12:46.420509 | orchestrator | 
skipping: [testbed-node-3] 2025-07-12 20:12:46.420518 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 20:12:46.420527 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.420535 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-07-12 20:12:46.420543 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.420552 | orchestrator | 2025-07-12 20:12:46.420560 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-07-12 20:12:46.420569 | orchestrator | Saturday 12 July 2025 20:10:36 +0000 (0:00:02.347) 0:01:17.374 ********* 2025-07-12 20:12:46.420574 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:12:46.420579 | orchestrator | 2025-07-12 20:12:46.420584 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-07-12 20:12:46.420589 | orchestrator | Saturday 12 July 2025 20:10:36 +0000 (0:00:00.572) 0:01:17.946 ********* 2025-07-12 20:12:46.420593 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:12:46.420599 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:46.420607 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:46.420615 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:46.420623 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:12:46.420631 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.420639 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.420648 | orchestrator | 2025-07-12 20:12:46.420656 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-07-12 20:12:46.420665 | orchestrator | Saturday 12 July 2025 20:10:37 +0000 (0:00:00.653) 0:01:18.599 ********* 2025-07-12 20:12:46.420673 | orchestrator 
| skipping: [testbed-manager] 2025-07-12 20:12:46.420681 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:12:46.420689 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.420696 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.420704 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:46.420712 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:12:46.420720 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:12:46.420728 | orchestrator | 2025-07-12 20:12:46.420737 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-07-12 20:12:46.420747 | orchestrator | Saturday 12 July 2025 20:10:40 +0000 (0:00:02.577) 0:01:21.177 ********* 2025-07-12 20:12:46.420752 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 20:12:46.420757 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:12:46.420762 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 20:12:46.420767 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:46.420772 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 20:12:46.420776 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:46.420781 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 20:12:46.420786 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.420791 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 20:12:46.420796 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:46.420805 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 20:12:46.420810 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:12:46.420815 | 
orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-07-12 20:12:46.420820 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.420825 | orchestrator | 2025-07-12 20:12:46.420830 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-07-12 20:12:46.420835 | orchestrator | Saturday 12 July 2025 20:10:42 +0000 (0:00:02.253) 0:01:23.430 ********* 2025-07-12 20:12:46.420839 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-12 20:12:46.420844 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-12 20:12:46.420849 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:46.420854 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:46.420859 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-07-12 20:12:46.420864 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-12 20:12:46.420869 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:46.420874 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-12 20:12:46.420878 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:12:46.420883 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-12 20:12:46.420888 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.420893 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-12 20:12:46.420898 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.420902 | 
orchestrator | 2025-07-12 20:12:46.420907 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-07-12 20:12:46.420912 | orchestrator | Saturday 12 July 2025 20:10:44 +0000 (0:00:02.030) 0:01:25.460 ********* 2025-07-12 20:12:46.420917 | orchestrator | [WARNING]: Skipped 2025-07-12 20:12:46.420922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-07-12 20:12:46.420927 | orchestrator | due to this access issue: 2025-07-12 20:12:46.420931 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-07-12 20:12:46.420936 | orchestrator | not a directory 2025-07-12 20:12:46.420941 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:12:46.420946 | orchestrator | 2025-07-12 20:12:46.420951 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-07-12 20:12:46.420959 | orchestrator | Saturday 12 July 2025 20:10:45 +0000 (0:00:01.137) 0:01:26.598 ********* 2025-07-12 20:12:46.420964 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:12:46.420969 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:12:46.420973 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:46.420978 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:46.420983 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:12:46.420988 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.420993 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.420997 | orchestrator | 2025-07-12 20:12:46.421002 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-07-12 20:12:46.421007 | orchestrator | Saturday 12 July 2025 20:10:46 +0000 (0:00:00.978) 0:01:27.576 ********* 2025-07-12 20:12:46.421012 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:12:46.421017 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
20:12:46.421021 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:12:46.421026 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:12:46.421031 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:12:46.421036 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:12:46.421041 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:12:46.421045 | orchestrator | 2025-07-12 20:12:46.421050 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-07-12 20:12:46.421055 | orchestrator | Saturday 12 July 2025 20:10:47 +0000 (0:00:01.046) 0:01:28.622 ********* 2025-07-12 20:12:46.421061 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:12:46.421082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 
20:12:46.421108 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.421116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.421122 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.421136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.421147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421155 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.421164 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:12:46.421175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421218 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:12:46.421264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421291 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421300 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421321 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:12:46.421326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:12:46.421331 | orchestrator | 2025-07-12 20:12:46.421336 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-07-12 20:12:46.421341 | orchestrator | Saturday 12 July 2025 20:10:52 +0000 (0:00:04.922) 0:01:33.545 ********* 2025-07-12 20:12:46.421346 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 20:12:46.421351 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:12:46.421359 | orchestrator | 2025-07-12 20:12:46.421367 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:12:46.421375 
| orchestrator | Saturday 12 July 2025 20:10:54 +0000 (0:00:01.922) 0:01:35.467 ********* 2025-07-12 20:12:46.421383 | orchestrator | 2025-07-12 20:12:46.421392 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:12:46.421398 | orchestrator | Saturday 12 July 2025 20:10:54 +0000 (0:00:00.479) 0:01:35.947 ********* 2025-07-12 20:12:46.421402 | orchestrator | 2025-07-12 20:12:46.421407 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:12:46.421412 | orchestrator | Saturday 12 July 2025 20:10:55 +0000 (0:00:00.262) 0:01:36.209 ********* 2025-07-12 20:12:46.421417 | orchestrator | 2025-07-12 20:12:46.421422 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:12:46.421427 | orchestrator | Saturday 12 July 2025 20:10:55 +0000 (0:00:00.171) 0:01:36.381 ********* 2025-07-12 20:12:46.421432 | orchestrator | 2025-07-12 20:12:46.421437 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:12:46.421441 | orchestrator | Saturday 12 July 2025 20:10:55 +0000 (0:00:00.184) 0:01:36.565 ********* 2025-07-12 20:12:46.421446 | orchestrator | 2025-07-12 20:12:46.421451 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:12:46.421456 | orchestrator | Saturday 12 July 2025 20:10:55 +0000 (0:00:00.157) 0:01:36.723 ********* 2025-07-12 20:12:46.421461 | orchestrator | 2025-07-12 20:12:46.421465 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:12:46.421470 | orchestrator | Saturday 12 July 2025 20:10:55 +0000 (0:00:00.169) 0:01:36.892 ********* 2025-07-12 20:12:46.421475 | orchestrator | 2025-07-12 20:12:46.421480 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-07-12 
20:12:46.421485 | orchestrator | Saturday 12 July 2025 20:10:56 +0000 (0:00:00.300) 0:01:37.193 ********* 2025-07-12 20:12:46.421493 | orchestrator | changed: [testbed-manager] 2025-07-12 20:12:46.421497 | orchestrator | 2025-07-12 20:12:46.421502 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-07-12 20:12:46.421507 | orchestrator | Saturday 12 July 2025 20:11:17 +0000 (0:00:21.759) 0:01:58.953 ********* 2025-07-12 20:12:46.421512 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:46.421517 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:12:46.421521 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:12:46.421529 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:12:46.421534 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:12:46.421539 | orchestrator | changed: [testbed-manager] 2025-07-12 20:12:46.421544 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:12:46.421549 | orchestrator | 2025-07-12 20:12:46.421553 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-07-12 20:12:46.421558 | orchestrator | Saturday 12 July 2025 20:11:32 +0000 (0:00:14.802) 0:02:13.756 ********* 2025-07-12 20:12:46.421563 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:46.421568 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:12:46.421573 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:12:46.421577 | orchestrator | 2025-07-12 20:12:46.421582 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-07-12 20:12:46.421587 | orchestrator | Saturday 12 July 2025 20:11:44 +0000 (0:00:12.213) 0:02:25.970 ********* 2025-07-12 20:12:46.421592 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:12:46.421597 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:46.421601 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:12:46.421606 | orchestrator | 
2025-07-12 20:12:46.421611 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-07-12 20:12:46.421616 | orchestrator | Saturday 12 July 2025 20:11:56 +0000 (0:00:11.523) 0:02:37.494 ********* 2025-07-12 20:12:46.421620 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:46.421628 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:12:46.421633 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:12:46.421637 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:12:46.421642 | orchestrator | changed: [testbed-manager] 2025-07-12 20:12:46.421647 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:12:46.421651 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:12:46.421656 | orchestrator | 2025-07-12 20:12:46.421661 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-07-12 20:12:46.421666 | orchestrator | Saturday 12 July 2025 20:12:13 +0000 (0:00:16.870) 0:02:54.364 ********* 2025-07-12 20:12:46.421671 | orchestrator | changed: [testbed-manager] 2025-07-12 20:12:46.421676 | orchestrator | 2025-07-12 20:12:46.421680 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-07-12 20:12:46.421685 | orchestrator | Saturday 12 July 2025 20:12:24 +0000 (0:00:11.594) 0:03:05.958 ********* 2025-07-12 20:12:46.421690 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:12:46.421695 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:12:46.421699 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:12:46.421704 | orchestrator | 2025-07-12 20:12:46.421709 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-07-12 20:12:46.421714 | orchestrator | Saturday 12 July 2025 20:12:33 +0000 (0:00:09.011) 0:03:14.969 ********* 2025-07-12 20:12:46.421719 | orchestrator | changed: [testbed-manager] 2025-07-12 20:12:46.421727 | orchestrator | 
2025-07-12 20:12:46.421735 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-07-12 20:12:46.421743 | orchestrator | Saturday 12 July 2025 20:12:39 +0000 (0:00:05.464) 0:03:20.434 ********* 2025-07-12 20:12:46.421751 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:12:46.421759 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:12:46.421768 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:12:46.421777 | orchestrator | 2025-07-12 20:12:46.421787 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:12:46.421796 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:12:46.421810 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:12:46.421819 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:12:46.421827 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:12:46.421835 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 20:12:46.421844 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 20:12:46.421852 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 20:12:46.421859 | orchestrator | 2025-07-12 20:12:46.421864 | orchestrator | 2025-07-12 20:12:46.421869 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:12:46.421874 | orchestrator | Saturday 12 July 2025 20:12:45 +0000 (0:00:05.867) 0:03:26.302 ********* 2025-07-12 20:12:46.421879 | orchestrator | 
=============================================================================== 2025-07-12 20:12:46.421884 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 22.90s 2025-07-12 20:12:46.421889 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 21.76s 2025-07-12 20:12:46.421894 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.51s 2025-07-12 20:12:46.421899 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.87s 2025-07-12 20:12:46.421904 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.80s 2025-07-12 20:12:46.421909 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.21s 2025-07-12 20:12:46.421914 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 11.59s 2025-07-12 20:12:46.421923 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.52s 2025-07-12 20:12:46.421928 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.01s 2025-07-12 20:12:46.421933 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.08s 2025-07-12 20:12:46.421938 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.19s 2025-07-12 20:12:46.421943 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.87s 2025-07-12 20:12:46.421947 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.46s 2025-07-12 20:12:46.421952 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.92s 2025-07-12 20:12:46.421957 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.06s 2025-07-12 20:12:46.421962 | orchestrator | prometheus : 
Ensuring config directories exist -------------------------- 3.96s 2025-07-12 20:12:46.421967 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.94s 2025-07-12 20:12:46.421972 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.58s 2025-07-12 20:12:46.421979 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.35s 2025-07-12 20:12:46.421984 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.25s 2025-07-12 20:12:46.421989 | orchestrator | 2025-07-12 20:12:46 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:46.421994 | orchestrator | 2025-07-12 20:12:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:49.459767 | orchestrator | 2025-07-12 20:12:49 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:49.461462 | orchestrator | 2025-07-12 20:12:49 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:12:49.462664 | orchestrator | 2025-07-12 20:12:49 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:49.464515 | orchestrator | 2025-07-12 20:12:49 | INFO  | Task 5b531132-e9f0-4d0a-9d82-164e18a0c121 is in state STARTED 2025-07-12 20:12:49.464668 | orchestrator | 2025-07-12 20:12:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:12:52.507039 | orchestrator | 2025-07-12 20:12:52 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:12:52.507639 | orchestrator | 2025-07-12 20:12:52 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:12:52.509125 | orchestrator | 2025-07-12 20:12:52 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:12:52.510197 | orchestrator | 2025-07-12 20:12:52 | INFO  | Task 5b531132-e9f0-4d0a-9d82-164e18a0c121 is in 
20:13:41.246422 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:13:41.246823 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:13:41.247593 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:13:41.248358 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task 5b531132-e9f0-4d0a-9d82-164e18a0c121 is in state STARTED 2025-07-12 20:13:41.248390 | orchestrator | 2025-07-12 20:13:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:44.270655 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:13:44.271150 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:13:44.271976 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:13:44.272423 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task 5b531132-e9f0-4d0a-9d82-164e18a0c121 is in state STARTED 2025-07-12 20:13:44.272458 | orchestrator | 2025-07-12 20:13:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:47.309651 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state STARTED 2025-07-12 20:13:47.309926 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:13:47.310694 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:13:47.311349 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task 5b531132-e9f0-4d0a-9d82-164e18a0c121 is in state STARTED 2025-07-12 20:13:47.311364 | orchestrator | 2025-07-12 20:13:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:50.335805 | orchestrator 
| 2025-07-12 20:13:50 | INFO  | Task e2ba4e61-f3f1-4f54-9f2a-3b4d2296023a is in state SUCCESS
2025-07-12 20:13:50.336601 | orchestrator |
2025-07-12 20:13:50.336626 | orchestrator |
2025-07-12 20:13:50.336634 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:13:50.336642 | orchestrator |
2025-07-12 20:13:50.336649 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:13:50.336656 | orchestrator | Saturday 12 July 2025 20:09:42 +0000 (0:00:00.271) 0:00:00.271 *********
2025-07-12 20:13:50.336664 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:13:50.336682 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:13:50.336689 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:13:50.336696 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:13:50.336702 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:13:50.336709 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:13:50.336716 | orchestrator |
2025-07-12 20:13:50.336723 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:13:50.336730 | orchestrator | Saturday 12 July 2025 20:09:42 +0000 (0:00:00.710) 0:00:00.981 *********
2025-07-12 20:13:50.336738 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-07-12 20:13:50.336745 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-07-12 20:13:50.336752 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-07-12 20:13:50.336761 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-07-12 20:13:50.336768 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-07-12 20:13:50.336774 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-07-12 20:13:50.336780 | orchestrator |
2025-07-12 20:13:50.336786 | orchestrator | PLAY [Apply role cinder]
*******************************************************
2025-07-12 20:13:50.336792 | orchestrator |
2025-07-12 20:13:50.336811 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 20:13:50.336818 | orchestrator | Saturday 12 July 2025 20:09:43 +0000 (0:00:00.639) 0:00:01.621 *********
2025-07-12 20:13:50.336824 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:13:50.336831 | orchestrator |
2025-07-12 20:13:50.336836 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-07-12 20:13:50.336842 | orchestrator | Saturday 12 July 2025 20:09:44 +0000 (0:00:01.166) 0:00:02.787 *********
2025-07-12 20:13:50.336849 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-07-12 20:13:50.336854 | orchestrator |
2025-07-12 20:13:50.336860 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-07-12 20:13:50.336866 | orchestrator | Saturday 12 July 2025 20:09:48 +0000 (0:00:03.674) 0:00:06.461 *********
2025-07-12 20:13:50.336872 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-07-12 20:13:50.336879 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-07-12 20:13:50.336885 | orchestrator |
2025-07-12 20:13:50.336916 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-07-12 20:13:50.336924 | orchestrator | Saturday 12 July 2025 20:09:55 +0000 (0:00:06.978) 0:00:13.439 *********
2025-07-12 20:13:50.336931 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:13:50.336938 | orchestrator |
2025-07-12 20:13:50.336962 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-07-12 20:13:50.336971 | orchestrator | Saturday 12 July 2025 20:09:58 +0000 (0:00:03.339) 0:00:16.779 *********
2025-07-12 20:13:50.336978 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:13:50.336985 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-07-12 20:13:50.336992 | orchestrator |
2025-07-12 20:13:50.336998 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-07-12 20:13:50.337005 | orchestrator | Saturday 12 July 2025 20:10:02 +0000 (0:00:04.148) 0:00:20.928 *********
2025-07-12 20:13:50.337012 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:13:50.337020 | orchestrator |
2025-07-12 20:13:50.337026 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-07-12 20:13:50.337033 | orchestrator | Saturday 12 July 2025 20:10:06 +0000 (0:00:03.583) 0:00:24.512 *********
2025-07-12 20:13:50.337039 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-07-12 20:13:50.337046 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-07-12 20:13:50.337052 | orchestrator |
2025-07-12 20:13:50.337058 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-07-12 20:13:50.337104 | orchestrator | Saturday 12 July 2025 20:10:14 +0000 (0:00:07.942) 0:00:32.454 *********
2025-07-12 20:13:50.337117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.337144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.337159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.337167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.337175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.337182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.337197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.337208 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.337215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.337221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.337228 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.337234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.337314 | orchestrator | 2025-07-12 20:13:50.337343 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 20:13:50.337351 | orchestrator | Saturday 12 July 2025 20:10:16 +0000 (0:00:01.872) 0:00:34.326 ********* 2025-07-12 20:13:50.337358 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:13:50.337364 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:13:50.337371 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:13:50.337377 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:13:50.337387 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:13:50.337394 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:13:50.337401 | orchestrator | 2025-07-12 20:13:50.337407 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-07-12 20:13:50.337414 | orchestrator | Saturday 12 July 2025 20:10:17 +0000 (0:00:00.819) 0:00:35.146 ********* 2025-07-12 20:13:50.337421 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:13:50.337427 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:13:50.337433 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:13:50.337440 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:13:50.337447 | orchestrator | 2025-07-12 20:13:50.337454 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-07-12 20:13:50.337461 | orchestrator | Saturday 12 July 2025 20:10:18 +0000 (0:00:01.058) 0:00:36.204 ********* 2025-07-12 20:13:50.337468 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-07-12 20:13:50.337476 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-07-12 20:13:50.337483 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-07-12 20:13:50.337489 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-07-12 20:13:50.337496 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-07-12 20:13:50.337502 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-07-12 20:13:50.337509 | orchestrator | 2025-07-12 20:13:50.337515 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-07-12 20:13:50.337523 | orchestrator | Saturday 12 July 2025 20:10:20 +0000 (0:00:02.045) 0:00:38.250 ********* 2025-07-12 20:13:50.337530 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 20:13:50.337539 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 20:13:50.337551 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 20:13:50.337568 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 20:13:50.337574 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 
20:13:50.337581 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-12 20:13:50.337589 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 20:13:50.337600 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 20:13:50.337614 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 20:13:50.337622 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 20:13:50.337630 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 20:13:50.337637 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-12 20:13:50.337658 | orchestrator | 2025-07-12 20:13:50.337672 | orchestrator | TASK [cinder : Copy over Ceph 
keyring files for cinder-volume] *****************
2025-07-12 20:13:50.337678 | orchestrator | Saturday 12 July 2025 20:10:24 +0000 (0:00:03.879) 0:00:42.129 *********
2025-07-12 20:13:50.337684 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 20:13:50.337691 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 20:13:50.337697 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 20:13:50.337703 | orchestrator |
2025-07-12 20:13:50.337709 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-07-12 20:13:50.337715 | orchestrator | Saturday 12 July 2025 20:10:26 +0000 (0:00:01.962) 0:00:44.092 *********
2025-07-12 20:13:50.337721 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-07-12 20:13:50.337727 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-07-12 20:13:50.337733 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-07-12 20:13:50.337739 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 20:13:50.337744 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 20:13:50.337753 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-07-12 20:13:50.337760 | orchestrator |
2025-07-12 20:13:50.337765 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-07-12 20:13:50.337771 | orchestrator | Saturday 12 July 2025 20:10:29 +0000 (0:00:03.127) 0:00:47.219 *********
2025-07-12 20:13:50.337776 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-07-12 20:13:50.337790 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-07-12 20:13:50.337796 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-07-12 20:13:50.337802 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-07-12 20:13:50.337808 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-07-12 20:13:50.337814 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-07-12 20:13:50.337821 | orchestrator |
2025-07-12 20:13:50.337828 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-07-12 20:13:50.337835 | orchestrator | Saturday 12 July 2025 20:10:30 +0000 (0:00:01.029) 0:00:48.249 *********
2025-07-12 20:13:50.337841 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:13:50.337847 | orchestrator |
2025-07-12 20:13:50.337854 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-07-12 20:13:50.337860 | orchestrator | Saturday 12 July 2025 20:10:30 +0000 (0:00:00.244) 0:00:48.493 *********
2025-07-12 20:13:50.337867 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:13:50.337873 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:13:50.337880 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:13:50.337887 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:13:50.337894 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:13:50.337900 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:13:50.337907 | orchestrator |
2025-07-12 20:13:50.337935 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 20:13:50.337942 | orchestrator | Saturday 12 July 2025 20:10:32 +0000 (0:00:01.785) 0:00:50.278 *********
2025-07-12 20:13:50.337958 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:13:50.337966 | orchestrator |
2025-07-12 20:13:50.337978 | orchestrator | TASK
[service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-07-12 20:13:50.337986 | orchestrator | Saturday 12 July 2025 20:10:34 +0000 (0:00:01.838) 0:00:52.117 ********* 2025-07-12 20:13:50.338013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.338055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.338078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.338089 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.338096 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.338108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.338115 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.338121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.338703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.338755 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.338767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.338788 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.338797 | orchestrator | 2025-07-12 20:13:50.338806 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-07-12 20:13:50.338814 | orchestrator | Saturday 12 July 2025 20:10:37 +0000 
(0:00:03.706) 0:00:55.823 ********* 2025-07-12 20:13:50.338823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:13:50.338842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.338851 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:13:50.338863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:13:50.338877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.338885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:13:50.338893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.338901 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:13:50.338908 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:13:50.338916 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.338931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.338940 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:13:50.338947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.338959 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.338967 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:13:50.338975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.338983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.338990 | orchestrator | skipping: [testbed-node-5] 
2025-07-12 20:13:50.338998 | orchestrator | 2025-07-12 20:13:50.339005 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-07-12 20:13:50.339012 | orchestrator | Saturday 12 July 2025 20:10:39 +0000 (0:00:02.080) 0:00:57.904 ********* 2025-07-12 20:13:50.339028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:13:50.339040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339048 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:13:50.339056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:13:50.339063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:13:50.339123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339135 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:13:50.339143 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:13:50.339154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339170 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:13:50.339178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': 
True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339193 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:13:50.339204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339227 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:13:50.339235 | orchestrator | 2025-07-12 20:13:50.339242 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-07-12 20:13:50.339250 | orchestrator | Saturday 12 July 2025 20:10:42 +0000 (0:00:02.676) 0:01:00.580 ********* 2025-07-12 20:13:50.339258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.339268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.339277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.339299 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339319 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339327 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339420 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}) 2025-07-12 20:13:50.339434 | orchestrator | 2025-07-12 20:13:50.339447 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-07-12 20:13:50.339460 | orchestrator | Saturday 12 July 2025 20:10:45 +0000 (0:00:03.364) 0:01:03.946 ********* 2025-07-12 20:13:50.339469 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 20:13:50.339478 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:13:50.339486 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 20:13:50.339495 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:13:50.339503 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-12 20:13:50.339511 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:13:50.339519 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 20:13:50.339528 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 20:13:50.339536 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-12 20:13:50.339545 | orchestrator | 2025-07-12 20:13:50.339553 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-07-12 20:13:50.339562 | orchestrator | Saturday 12 July 2025 20:10:48 +0000 (0:00:02.490) 0:01:06.437 ********* 2025-07-12 20:13:50.339571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.339594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.339604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.339614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339641 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339700 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.339708 | orchestrator | 2025-07-12 20:13:50.339716 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-07-12 20:13:50.339725 | orchestrator | Saturday 12 July 2025 20:10:58 +0000 (0:00:10.528) 0:01:16.965 ********* 2025-07-12 20:13:50.339739 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:13:50.339749 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:13:50.339759 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:13:50.339767 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:13:50.339776 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:13:50.339785 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:13:50.339793 | orchestrator | 2025-07-12 20:13:50.339802 | orchestrator | TASK [cinder : Copying over existing policy file] 
****************************** 2025-07-12 20:13:50.339815 | orchestrator | Saturday 12 July 2025 20:11:01 +0000 (0:00:02.263) 0:01:19.229 ********* 2025-07-12 20:13:50.339824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:13:50.339834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:13:50.339857 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:13:50.339867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:13:50.339894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339904 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:13:50.339913 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:13:50.339922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339931 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339945 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:13:50.339954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339964 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.339973 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:13:50.339990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.340001 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:13:50.340010 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:13:50.340019 | orchestrator | 2025-07-12 20:13:50.340028 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-07-12 20:13:50.340037 | orchestrator | Saturday 12 July 2025 20:11:02 +0000 (0:00:01.791) 0:01:21.020 ********* 2025-07-12 20:13:50.340045 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:13:50.340054 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:13:50.340086 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:13:50.340096 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:13:50.340105 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:13:50.340114 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:13:50.340123 | orchestrator | 2025-07-12 20:13:50.340132 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-07-12 20:13:50.340141 | orchestrator | Saturday 12 July 2025 20:11:04 +0000 (0:00:01.122) 0:01:22.143 ********* 2025-07-12 20:13:50.340151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.340160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.340180 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.340190 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.340205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.340215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:13:50.340224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.340246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.340256 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.340265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.340281 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.340291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:13:50.340300 | orchestrator | 2025-07-12 20:13:50.340310 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 20:13:50.340319 | orchestrator | Saturday 12 July 2025 20:11:06 +0000 (0:00:02.507) 0:01:24.650 ********* 2025-07-12 20:13:50.340328 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:13:50.340343 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:13:50.340357 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:13:50.340372 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:13:50.340386 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:13:50.340400 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:13:50.340417 | orchestrator | 2025-07-12 20:13:50.340434 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-07-12 20:13:50.340450 | orchestrator | Saturday 12 July 2025 20:11:07 +0000 (0:00:00.507) 0:01:25.158 ********* 2025-07-12 20:13:50.340459 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:13:50.340468 | orchestrator | 2025-07-12 20:13:50.340478 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-07-12 20:13:50.340486 | orchestrator | Saturday 12 July 2025 20:11:09 +0000 
(0:00:02.193) 0:01:27.352 ********* 2025-07-12 20:13:50.340495 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:13:50.340504 | orchestrator | 2025-07-12 20:13:50.340516 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-07-12 20:13:50.340526 | orchestrator | Saturday 12 July 2025 20:11:11 +0000 (0:00:02.282) 0:01:29.634 ********* 2025-07-12 20:13:50.340534 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:13:50.340543 | orchestrator | 2025-07-12 20:13:50.340552 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:13:50.340560 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:19.731) 0:01:49.366 ********* 2025-07-12 20:13:50.340569 | orchestrator | 2025-07-12 20:13:50.340585 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:13:50.340594 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:00.066) 0:01:49.432 ********* 2025-07-12 20:13:50.340603 | orchestrator | 2025-07-12 20:13:50.340611 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:13:50.340625 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:00.068) 0:01:49.501 ********* 2025-07-12 20:13:50.340642 | orchestrator | 2025-07-12 20:13:50.340651 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:13:50.340661 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:00.067) 0:01:49.568 ********* 2025-07-12 20:13:50.340669 | orchestrator | 2025-07-12 20:13:50.340678 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:13:50.340687 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:00.065) 0:01:49.634 ********* 2025-07-12 20:13:50.340696 | orchestrator | 2025-07-12 20:13:50.340705 | orchestrator | TASK [cinder : 
Flush handlers] ************************************************* 2025-07-12 20:13:50.340713 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:00.067) 0:01:49.702 ********* 2025-07-12 20:13:50.340722 | orchestrator | 2025-07-12 20:13:50.340731 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-07-12 20:13:50.340739 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:00.064) 0:01:49.766 ********* 2025-07-12 20:13:50.340748 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:13:50.340758 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:13:50.340766 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:13:50.340775 | orchestrator | 2025-07-12 20:13:50.340784 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-07-12 20:13:50.340793 | orchestrator | Saturday 12 July 2025 20:11:54 +0000 (0:00:23.128) 0:02:12.895 ********* 2025-07-12 20:13:50.340802 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:13:50.340810 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:13:50.340819 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:13:50.340828 | orchestrator | 2025-07-12 20:13:50.340838 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-07-12 20:13:50.340846 | orchestrator | Saturday 12 July 2025 20:12:08 +0000 (0:00:13.388) 0:02:26.283 ********* 2025-07-12 20:13:50.340855 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:13:50.340864 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:13:50.340872 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:13:50.340881 | orchestrator | 2025-07-12 20:13:50.340890 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-07-12 20:13:50.340899 | orchestrator | Saturday 12 July 2025 20:13:38 +0000 (0:01:30.228) 0:03:56.512 ********* 2025-07-12 20:13:50.340907 | orchestrator 
| changed: [testbed-node-3] 2025-07-12 20:13:50.340916 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:13:50.340925 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:13:50.340933 | orchestrator | 2025-07-12 20:13:50.340942 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-07-12 20:13:50.340951 | orchestrator | Saturday 12 July 2025 20:13:47 +0000 (0:00:09.389) 0:04:05.901 ********* 2025-07-12 20:13:50.340960 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:13:50.340968 | orchestrator | 2025-07-12 20:13:50.340977 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:13:50.340986 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:13:50.340996 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-12 20:13:50.341004 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-12 20:13:50.341014 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:13:50.341023 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:13:50.341032 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:13:50.341046 | orchestrator | 2025-07-12 20:13:50.341055 | orchestrator | 2025-07-12 20:13:50.341114 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:13:50.341129 | orchestrator | Saturday 12 July 2025 20:13:49 +0000 (0:00:01.222) 0:04:07.124 ********* 2025-07-12 20:13:50.341138 | orchestrator | =============================================================================== 2025-07-12 20:13:50.341146 | orchestrator | 
cinder : Restart cinder-volume container ------------------------------- 90.23s 2025-07-12 20:13:50.341155 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 23.13s 2025-07-12 20:13:50.341164 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.73s 2025-07-12 20:13:50.341173 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 13.39s 2025-07-12 20:13:50.341181 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.53s 2025-07-12 20:13:50.341190 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 9.39s 2025-07-12 20:13:50.341199 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.94s 2025-07-12 20:13:50.341208 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.98s 2025-07-12 20:13:50.341271 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.15s 2025-07-12 20:13:50.341282 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.88s 2025-07-12 20:13:50.341291 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.71s 2025-07-12 20:13:50.341300 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.67s 2025-07-12 20:13:50.341314 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.58s 2025-07-12 20:13:50.341323 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.37s 2025-07-12 20:13:50.341333 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.34s 2025-07-12 20:13:50.341348 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.13s 2025-07-12 20:13:50.341363 | orchestrator | 
service-cert-copy : cinder | Copying over backend internal TLS key ------ 2.68s 2025-07-12 20:13:50.341379 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.51s 2025-07-12 20:13:50.341393 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.49s 2025-07-12 20:13:50.341408 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.28s 2025-07-12 20:13:50.341422 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:13:50.341437 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:13:50.341453 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task 5b531132-e9f0-4d0a-9d82-164e18a0c121 is in state STARTED 2025-07-12 20:13:50.341469 | orchestrator | 2025-07-12 20:13:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:53.361837 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:13:53.363208 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:13:53.363251 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:13:53.363784 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task 5b531132-e9f0-4d0a-9d82-164e18a0c121 is in state STARTED 2025-07-12 20:13:53.363816 | orchestrator | 2025-07-12 20:13:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:56.391653 | orchestrator | 2025-07-12 20:13:56 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:13:56.392943 | orchestrator | 2025-07-12 20:13:56 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:13:56.393603 | orchestrator | 2025-07-12 20:13:56 | INFO  | Task 
9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED
2025-07-12 20:14:50.963435 | orchestrator | 2025-07-12 20:14:50 | INFO  | Task 5b531132-e9f0-4d0a-9d82-164e18a0c121 is in state STARTED
2025-07-12 20:14:50.963505 | orchestrator | 2025-07-12 20:14:50 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:14:54.012151 | orchestrator | 2025-07-12 20:14:54 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED
2025-07-12 20:14:54.012707 | orchestrator | 2025-07-12 20:14:54 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED
2025-07-12 20:14:54.015078 | orchestrator | 2025-07-12 20:14:54 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED
2025-07-12 20:14:54.017266 | orchestrator | 2025-07-12 20:14:54 | INFO  | Task 5b531132-e9f0-4d0a-9d82-164e18a0c121 is in state SUCCESS
2025-07-12 20:14:54.017296 | orchestrator | 2025-07-12 20:14:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:14:54.018470 | orchestrator |
2025-07-12 20:14:54.018503 | orchestrator |
2025-07-12 20:14:54.018511 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:14:54.018518 | orchestrator |
2025-07-12 20:14:54.018525 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:14:54.018531 | orchestrator | Saturday 12 July 2025 20:12:49 +0000 (0:00:00.262) 0:00:00.262 *********
2025-07-12 20:14:54.018538 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:14:54.018545 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:14:54.018552 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:14:54.018559 | orchestrator |
2025-07-12 20:14:54.018565 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:14:54.018572 | orchestrator | Saturday 12 July 2025 20:12:49 +0000 (0:00:00.273) 0:00:00.536 *********
2025-07-12 20:14:54.018578 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-07-12 20:14:54.018585 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-07-12 20:14:54.018591 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-07-12 20:14:54.018598 | orchestrator |
2025-07-12 20:14:54.018604 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-07-12 20:14:54.018609 | orchestrator |
2025-07-12 20:14:54.018616 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-12 20:14:54.018622 | orchestrator | Saturday 12 July 2025 20:12:49 +0000 (0:00:00.361) 0:00:00.897 *********
2025-07-12 20:14:54.018628 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:14:54.018636 | orchestrator |
2025-07-12 20:14:54.018642 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-07-12 20:14:54.018649 | orchestrator | Saturday 12 July 2025 20:12:50 +0000 (0:00:00.479) 0:00:01.377 *********
2025-07-12 20:14:54.018656 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-07-12 20:14:54.018663 | orchestrator |
2025-07-12 20:14:54.018670 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-07-12 20:14:54.018694 | orchestrator | Saturday 12 July 2025 20:12:53 +0000 (0:00:03.242) 0:00:04.619 *********
2025-07-12 20:14:54.018701 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-07-12 20:14:54.018708 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-07-12 20:14:54.018715 | orchestrator |
2025-07-12 20:14:54.018722 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-07-12 20:14:54.018729 | orchestrator | Saturday 12 July 2025 20:13:00 +0000 (0:00:06.449) 0:00:11.069 *********
2025-07-12 20:14:54.018736 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:14:54.018742 | orchestrator |
2025-07-12 20:14:54.018749 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-07-12 20:14:54.018756 | orchestrator | Saturday 12 July 2025 20:13:03 +0000 (0:00:03.174) 0:00:14.243 *********
2025-07-12 20:14:54.018762 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:14:54.018770 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-07-12 20:14:54.018776 | orchestrator |
2025-07-12 20:14:54.018782 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-07-12 20:14:54.018790 | orchestrator | Saturday 12 July 2025 20:13:07 +0000 (0:00:03.943) 0:00:18.187 *********
2025-07-12 20:14:54.018797 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:14:54.018826 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-07-12 20:14:54.018835 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-07-12 20:14:54.018842 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-07-12 20:14:54.018849 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-07-12 20:14:54.018855 | orchestrator |
2025-07-12 20:14:54.018862 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-07-12 20:14:54.018869 | orchestrator | Saturday 12 July 2025 20:13:23 +0000 (0:00:16.225) 0:00:34.412 *********
2025-07-12 20:14:54.018876 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-07-12 20:14:54.018882 | orchestrator |
2025-07-12 20:14:54.018889 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-07-12 20:14:54.018895 | orchestrator
| Saturday 12 July 2025 20:13:28 +0000 (0:00:04.919) 0:00:39.332 ********* 2025-07-12 20:14:54.018913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:14:54.018930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2025-07-12 20:14:54.018942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:14:54.018951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.018960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.018970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.018981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.018989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.019000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.019008 | orchestrator |
2025-07-12 20:14:54.019015 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-07-12 20:14:54.019098 | orchestrator | Saturday 12 July 2025 20:13:31 +0000 (0:00:02.737) 0:00:42.070 *********
2025-07-12 20:14:54.019110 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-07-12 20:14:54.019118 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-07-12 20:14:54.019125 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-07-12 20:14:54.019138 | orchestrator |
2025-07-12 20:14:54.019150 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-07-12 20:14:54.019161 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:01.375) 0:00:43.445 *********
2025-07-12 20:14:54.019173 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:14:54.019186 | orchestrator |
2025-07-12 20:14:54.019199 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-07-12 20:14:54.019211 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:00.116) 0:00:43.562 *********
2025-07-12 20:14:54.019222 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:14:54.019234 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:14:54.019246 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:14:54.019258 | orchestrator |
2025-07-12 20:14:54.019270 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-12 20:14:54.019279 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:00.463) 0:00:44.026 *********
2025-07-12 20:14:54.019287 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:14:54.019293 | orchestrator |
2025-07-12 20:14:54.019300 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-07-12 20:14:54.019306 | orchestrator | Saturday 12 July 2025 20:13:33 +0000 (0:00:00.933) 0:00:44.959 *********
2025-07-12 20:14:54.019317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port':
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:14:54.019330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:14:54.019342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 
20:14:54.019349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.019357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.019366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.019375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.019392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.019400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:14:54.019407 | orchestrator | 2025-07-12 20:14:54.019413 | orchestrator | TASK 
[service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-07-12 20:14:54.019420 | orchestrator | Saturday 12 July 2025 20:13:39 +0000 (0:00:05.341) 0:00:50.300 ********* 2025-07-12 20:14:54.019427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:14:54.019435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019451 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:14:54.019465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:14:54.019472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019485 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:14:54.019492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:14:54.019499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019524 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:14:54.019530 | orchestrator | 2025-07-12 20:14:54.019537 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-07-12 20:14:54.019543 | orchestrator | Saturday 12 July 2025 20:13:41 +0000 (0:00:02.438) 0:00:52.739 ********* 2025-07-12 20:14:54.019553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:14:54.019560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019573 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:14:54.019580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:14:54.019592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019606 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:14:54.019616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:14:54.019623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:14:54.019636 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:14:54.019642 | orchestrator | 2025-07-12 20:14:54.019649 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-07-12 20:14:54.019655 | orchestrator | Saturday 12 July 2025 20:13:43 +0000 (0:00:01.634) 0:00:54.374 ********* 2025-07-12 20:14:54.019664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:14:54.019796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.019806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.019813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.019821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.019828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.019842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled':
True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.019852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.019860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.019867 | orchestrator |
2025-07-12 20:14:54.019874 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-07-12 20:14:54.019881 | orchestrator | Saturday 12 July 2025 20:13:47 +0000 (0:00:04.211) 0:00:58.586 *********
2025-07-12 20:14:54.019888 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:54.019895 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:14:54.019902 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:14:54.019908 | orchestrator |
2025-07-12 20:14:54.019915 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-07-12 20:14:54.019922 | orchestrator | Saturday 12 July 2025 20:13:50 +0000 (0:00:02.691) 0:01:01.277 *********
2025-07-12 20:14:54.019929 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:14:54.019936 | orchestrator |
2025-07-12 20:14:54.019942 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-07-12 20:14:54.019949 | orchestrator | Saturday 12 July 2025 20:13:52 +0000 (0:00:02.428) 0:01:03.706 *********
2025-07-12 20:14:54.019956 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:14:54.019974 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:14:54.019980 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:14:54.019987 | orchestrator |
2025-07-12 20:14:54.019993 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-07-12 20:14:54.020000 | orchestrator | Saturday 12 July 2025 20:13:54 +0000 (0:00:01.414) 0:01:05.120 *********
2025-07-12 20:14:54.020006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.020021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.020033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311',
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.020040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020113 | orchestrator |
2025-07-12 20:14:54.020120 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-07-12 20:14:54.020127 | orchestrator | Saturday 12 July 2025 20:14:04 +0000 (0:00:10.695) 0:01:15.816 *********
2025-07-12 20:14:54.020137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.020145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020164 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:14:54.020171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.020180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020197 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:14:54.020204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.020217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020231 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:14:54.020238 | orchestrator |
2025-07-12 20:14:54.020244 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-07-12 20:14:54.020251 | orchestrator | Saturday 12 July 2025 20:14:06 +0000 (0:00:01.735) 0:01:17.551 *********
2025-07-12 20:14:54.020264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.020275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.020282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-07-12 20:14:54.020293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:14:54.020343 | orchestrator |
2025-07-12 20:14:54.020350 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-12 20:14:54.020358 | orchestrator | Saturday 12 July 2025 20:14:09 +0000 (0:00:03.073) 0:01:20.624 *********
2025-07-12 20:14:54.020364 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:14:54.020372 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:14:54.020378 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:14:54.020385 | orchestrator |
2025-07-12 20:14:54.020392 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-07-12 20:14:54.020399 | orchestrator | Saturday 12 July 2025 20:14:10 +0000 (0:00:00.485) 0:01:21.109 *********
2025-07-12 20:14:54.020405 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:54.020412 | orchestrator |
2025-07-12 20:14:54.020418 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-07-12 20:14:54.020424 | orchestrator | Saturday 12 July 2025 20:14:12 +0000 (0:00:02.379) 0:01:23.490 *********
2025-07-12 20:14:54.020431 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:54.020438 | orchestrator |
2025-07-12 20:14:54.020445 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-07-12 20:14:54.020452 | orchestrator | Saturday 12 July 2025 20:14:14 +0000 (0:00:02.346) 0:01:25.836 *********
2025-07-12 20:14:54.020458 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:54.020466 | orchestrator |
2025-07-12 20:14:54.020472 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-12 20:14:54.020479 | orchestrator | Saturday 12 July 2025 20:14:26 +0000 (0:00:11.734) 0:01:37.571 *********
2025-07-12 20:14:54.020486 | orchestrator |
2025-07-12 20:14:54.020493 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-12 20:14:54.020500 | orchestrator | Saturday 12 July 2025 20:14:26 +0000 (0:00:00.193) 0:01:37.764 *********
2025-07-12 20:14:54.020506 | orchestrator |
2025-07-12 20:14:54.020513 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-12 20:14:54.020520 | orchestrator | Saturday 12 July 2025 20:14:26 +0000 (0:00:00.205) 0:01:37.969 *********
2025-07-12 20:14:54.020527 | orchestrator |
2025-07-12 20:14:54.020533 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-07-12 20:14:54.020540 | orchestrator | Saturday 12 July 2025 20:14:27 +0000 (0:00:00.249) 0:01:38.218 *********
2025-07-12 20:14:54.020547 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:14:54.020554 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:54.020560 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:14:54.020567 | orchestrator |
2025-07-12 20:14:54.020574 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-07-12 20:14:54.020581 | orchestrator | Saturday 12 July 2025 20:14:37 +0000 (0:00:10.005) 0:01:48.224 *********
2025-07-12 20:14:54.020587 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:54.020595 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:14:54.020602 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:14:54.020609 | orchestrator |
2025-07-12 20:14:54.020616 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-07-12 20:14:54.020624 | orchestrator | Saturday 12 July 2025 20:14:44 +0000 (0:00:07.421) 0:01:55.646 *********
2025-07-12 20:14:54.020631 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:54.020638 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:14:54.020645 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:14:54.020652 | orchestrator |
2025-07-12 20:14:54.020663 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:14:54.020672 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 20:14:54.020685 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 20:14:54.020693 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 20:14:54.020700 | orchestrator |
2025-07-12 20:14:54.020707 | orchestrator |
2025-07-12 20:14:54.020715 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:14:54.020722 | orchestrator | Saturday 12 July 2025 20:14:52 +0000 (0:00:07.583) 0:02:03.229 *********
2025-07-12 20:14:54.020730 | orchestrator | ===============================================================================
2025-07-12 20:14:54.020738 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.23s
2025-07-12 20:14:54.020748 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.74s
2025-07-12 20:14:54.020755 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.70s
2025-07-12 20:14:54.020762 | orchestrator | barbican : Restart barbican-api
container ------------------------------ 10.01s 2025-07-12 20:14:54.020769 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.58s 2025-07-12 20:14:54.020776 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 7.42s 2025-07-12 20:14:54.020783 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.45s 2025-07-12 20:14:54.020790 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 5.34s 2025-07-12 20:14:54.020797 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.92s 2025-07-12 20:14:54.020804 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.21s 2025-07-12 20:14:54.020810 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.94s 2025-07-12 20:14:54.020817 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.24s 2025-07-12 20:14:54.020824 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.17s 2025-07-12 20:14:54.020831 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.07s 2025-07-12 20:14:54.020838 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.74s 2025-07-12 20:14:54.020845 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.69s 2025-07-12 20:14:54.020852 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.44s 2025-07-12 20:14:54.020859 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.43s 2025-07-12 20:14:54.020866 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.38s 2025-07-12 20:14:54.020873 | orchestrator | barbican : Creating barbican database 
user and setting permissions ------ 2.35s 2025-07-12 20:14:57.043423 | orchestrator | 2025-07-12 20:14:57 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:14:57.043700 | orchestrator | 2025-07-12 20:14:57 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:14:57.044781 | orchestrator | 2025-07-12 20:14:57 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:14:57.045704 | orchestrator | 2025-07-12 20:14:57 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:14:57.045729 | orchestrator | 2025-07-12 20:14:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:00.068267 | orchestrator | 2025-07-12 20:15:00 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:00.071204 | orchestrator | 2025-07-12 20:15:00 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:00.071225 | orchestrator | 2025-07-12 20:15:00 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:00.071247 | orchestrator | 2025-07-12 20:15:00 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:00.071254 | orchestrator | 2025-07-12 20:15:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:03.095374 | orchestrator | 2025-07-12 20:15:03 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:03.095592 | orchestrator | 2025-07-12 20:15:03 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:03.096527 | orchestrator | 2025-07-12 20:15:03 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:03.097025 | orchestrator | 2025-07-12 20:15:03 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:03.097172 | orchestrator | 2025-07-12 20:15:03 | INFO  | Wait 1 second(s) until the 
next check 2025-07-12 20:15:06.136984 | orchestrator | 2025-07-12 20:15:06 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:06.140228 | orchestrator | 2025-07-12 20:15:06 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:06.143536 | orchestrator | 2025-07-12 20:15:06 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:06.145409 | orchestrator | 2025-07-12 20:15:06 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:06.145470 | orchestrator | 2025-07-12 20:15:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:09.171698 | orchestrator | 2025-07-12 20:15:09 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:09.171817 | orchestrator | 2025-07-12 20:15:09 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:09.172417 | orchestrator | 2025-07-12 20:15:09 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:09.174520 | orchestrator | 2025-07-12 20:15:09 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:09.174580 | orchestrator | 2025-07-12 20:15:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:12.200312 | orchestrator | 2025-07-12 20:15:12 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:12.200799 | orchestrator | 2025-07-12 20:15:12 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:12.201213 | orchestrator | 2025-07-12 20:15:12 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:12.202663 | orchestrator | 2025-07-12 20:15:12 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:12.202702 | orchestrator | 2025-07-12 20:15:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 
20:15:15.245898 | orchestrator | 2025-07-12 20:15:15 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:15.246186 | orchestrator | 2025-07-12 20:15:15 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:15.246890 | orchestrator | 2025-07-12 20:15:15 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:15.249032 | orchestrator | 2025-07-12 20:15:15 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:15.249094 | orchestrator | 2025-07-12 20:15:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:18.293360 | orchestrator | 2025-07-12 20:15:18 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:18.295630 | orchestrator | 2025-07-12 20:15:18 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:18.297155 | orchestrator | 2025-07-12 20:15:18 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:18.299318 | orchestrator | 2025-07-12 20:15:18 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:18.299367 | orchestrator | 2025-07-12 20:15:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:21.347275 | orchestrator | 2025-07-12 20:15:21 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:21.348860 | orchestrator | 2025-07-12 20:15:21 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:21.350893 | orchestrator | 2025-07-12 20:15:21 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:21.352507 | orchestrator | 2025-07-12 20:15:21 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:21.352598 | orchestrator | 2025-07-12 20:15:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:24.411856 | orchestrator 
| 2025-07-12 20:15:24 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:24.412190 | orchestrator | 2025-07-12 20:15:24 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:24.412888 | orchestrator | 2025-07-12 20:15:24 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:24.413888 | orchestrator | 2025-07-12 20:15:24 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:24.413926 | orchestrator | 2025-07-12 20:15:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:27.447914 | orchestrator | 2025-07-12 20:15:27 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:27.450545 | orchestrator | 2025-07-12 20:15:27 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:27.450597 | orchestrator | 2025-07-12 20:15:27 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:27.451323 | orchestrator | 2025-07-12 20:15:27 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:27.451410 | orchestrator | 2025-07-12 20:15:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:30.493338 | orchestrator | 2025-07-12 20:15:30 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:30.495151 | orchestrator | 2025-07-12 20:15:30 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:30.503058 | orchestrator | 2025-07-12 20:15:30 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:30.514456 | orchestrator | 2025-07-12 20:15:30 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:30.514539 | orchestrator | 2025-07-12 20:15:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:33.561600 | orchestrator | 2025-07-12 20:15:33 | INFO  | 
Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:33.561729 | orchestrator | 2025-07-12 20:15:33 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:33.563710 | orchestrator | 2025-07-12 20:15:33 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:33.563755 | orchestrator | 2025-07-12 20:15:33 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:33.563767 | orchestrator | 2025-07-12 20:15:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:36.599175 | orchestrator | 2025-07-12 20:15:36 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:36.599950 | orchestrator | 2025-07-12 20:15:36 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:36.601002 | orchestrator | 2025-07-12 20:15:36 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:36.602012 | orchestrator | 2025-07-12 20:15:36 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:36.602278 | orchestrator | 2025-07-12 20:15:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:39.637898 | orchestrator | 2025-07-12 20:15:39 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:39.638224 | orchestrator | 2025-07-12 20:15:39 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:39.639012 | orchestrator | 2025-07-12 20:15:39 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:39.639606 | orchestrator | 2025-07-12 20:15:39 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state STARTED 2025-07-12 20:15:39.640129 | orchestrator | 2025-07-12 20:15:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:42.682887 | orchestrator | 2025-07-12 20:15:42 | INFO  | Task 
cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:42.683002 | orchestrator | 2025-07-12 20:15:42 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:15:42.689103 | orchestrator | 2025-07-12 20:15:42 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:42.689854 | orchestrator | 2025-07-12 20:15:42 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:42.690324 | orchestrator | 2025-07-12 20:15:42 | INFO  | Task 03f88d65-5d32-4950-8ab5-fd82992800a4 is in state SUCCESS 2025-07-12 20:15:42.690352 | orchestrator | 2025-07-12 20:15:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:45.728202 | orchestrator | 2025-07-12 20:15:45 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:45.729851 | orchestrator | 2025-07-12 20:15:45 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:15:45.732098 | orchestrator | 2025-07-12 20:15:45 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:45.733426 | orchestrator | 2025-07-12 20:15:45 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:45.733741 | orchestrator | 2025-07-12 20:15:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:48.773968 | orchestrator | 2025-07-12 20:15:48 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:48.774191 | orchestrator | 2025-07-12 20:15:48 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:15:48.774211 | orchestrator | 2025-07-12 20:15:48 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:48.774223 | orchestrator | 2025-07-12 20:15:48 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:48.774235 | orchestrator | 2025-07-12 20:15:48 | INFO  | Wait 1 
second(s) until the next check 2025-07-12 20:15:51.806995 | orchestrator | 2025-07-12 20:15:51 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:51.808034 | orchestrator | 2025-07-12 20:15:51 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:15:51.809974 | orchestrator | 2025-07-12 20:15:51 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:51.811169 | orchestrator | 2025-07-12 20:15:51 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:51.811809 | orchestrator | 2025-07-12 20:15:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:54.859582 | orchestrator | 2025-07-12 20:15:54 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:54.860755 | orchestrator | 2025-07-12 20:15:54 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:15:54.863301 | orchestrator | 2025-07-12 20:15:54 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:54.865188 | orchestrator | 2025-07-12 20:15:54 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:54.865278 | orchestrator | 2025-07-12 20:15:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:57.911184 | orchestrator | 2025-07-12 20:15:57 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:15:57.911697 | orchestrator | 2025-07-12 20:15:57 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:15:57.912650 | orchestrator | 2025-07-12 20:15:57 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:15:57.914665 | orchestrator | 2025-07-12 20:15:57 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:15:57.914743 | orchestrator | 2025-07-12 20:15:57 | INFO  | Wait 1 second(s) until the next check 
2025-07-12 20:16:00.955793 | orchestrator | 2025-07-12 20:16:00 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:00.956138 | orchestrator | 2025-07-12 20:16:00 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:00.956728 | orchestrator | 2025-07-12 20:16:00 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:00.957240 | orchestrator | 2025-07-12 20:16:00 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:00.957260 | orchestrator | 2025-07-12 20:16:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:03.994261 | orchestrator | 2025-07-12 20:16:03 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:03.994427 | orchestrator | 2025-07-12 20:16:03 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:03.994794 | orchestrator | 2025-07-12 20:16:03 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:03.995357 | orchestrator | 2025-07-12 20:16:03 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:03.996174 | orchestrator | 2025-07-12 20:16:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:07.044677 | orchestrator | 2025-07-12 20:16:07 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:07.044988 | orchestrator | 2025-07-12 20:16:07 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:07.045509 | orchestrator | 2025-07-12 20:16:07 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:07.045989 | orchestrator | 2025-07-12 20:16:07 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:07.046165 | orchestrator | 2025-07-12 20:16:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:10.073354 | 
orchestrator | 2025-07-12 20:16:10 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:10.073576 | orchestrator | 2025-07-12 20:16:10 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:10.073902 | orchestrator | 2025-07-12 20:16:10 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:10.074654 | orchestrator | 2025-07-12 20:16:10 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:10.074705 | orchestrator | 2025-07-12 20:16:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:13.109373 | orchestrator | 2025-07-12 20:16:13 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:13.111535 | orchestrator | 2025-07-12 20:16:13 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:13.115260 | orchestrator | 2025-07-12 20:16:13 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:13.115920 | orchestrator | 2025-07-12 20:16:13 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:13.116152 | orchestrator | 2025-07-12 20:16:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:16.151223 | orchestrator | 2025-07-12 20:16:16 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:16.153146 | orchestrator | 2025-07-12 20:16:16 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:16.155394 | orchestrator | 2025-07-12 20:16:16 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:16.157214 | orchestrator | 2025-07-12 20:16:16 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:16.157594 | orchestrator | 2025-07-12 20:16:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:19.200821 | orchestrator | 2025-07-12 
20:16:19 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:19.203386 | orchestrator | 2025-07-12 20:16:19 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:19.204801 | orchestrator | 2025-07-12 20:16:19 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:19.206532 | orchestrator | 2025-07-12 20:16:19 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:19.206567 | orchestrator | 2025-07-12 20:16:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:22.244740 | orchestrator | 2025-07-12 20:16:22 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:22.245876 | orchestrator | 2025-07-12 20:16:22 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:22.246864 | orchestrator | 2025-07-12 20:16:22 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:22.248830 | orchestrator | 2025-07-12 20:16:22 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:22.248885 | orchestrator | 2025-07-12 20:16:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:25.281434 | orchestrator | 2025-07-12 20:16:25 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:25.281868 | orchestrator | 2025-07-12 20:16:25 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:25.283363 | orchestrator | 2025-07-12 20:16:25 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:25.283826 | orchestrator | 2025-07-12 20:16:25 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:25.284056 | orchestrator | 2025-07-12 20:16:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:28.323972 | orchestrator | 2025-07-12 20:16:28 | INFO  | Task 
cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:28.326325 | orchestrator | 2025-07-12 20:16:28 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:28.326371 | orchestrator | 2025-07-12 20:16:28 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:28.326785 | orchestrator | 2025-07-12 20:16:28 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:28.327046 | orchestrator | 2025-07-12 20:16:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:31.355395 | orchestrator | 2025-07-12 20:16:31 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:31.358431 | orchestrator | 2025-07-12 20:16:31 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:31.358493 | orchestrator | 2025-07-12 20:16:31 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:31.360575 | orchestrator | 2025-07-12 20:16:31 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:31.360719 | orchestrator | 2025-07-12 20:16:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:34.413300 | orchestrator | 2025-07-12 20:16:34 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:34.415504 | orchestrator | 2025-07-12 20:16:34 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:34.417971 | orchestrator | 2025-07-12 20:16:34 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:34.423870 | orchestrator | 2025-07-12 20:16:34 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:34.425889 | orchestrator | 2025-07-12 20:16:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:37.484718 | orchestrator | 2025-07-12 20:16:37 | INFO  | Task 
cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:37.486858 | orchestrator | 2025-07-12 20:16:37 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:37.488236 | orchestrator | 2025-07-12 20:16:37 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:37.489717 | orchestrator | 2025-07-12 20:16:37 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:37.490044 | orchestrator | 2025-07-12 20:16:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:40.539417 | orchestrator | 2025-07-12 20:16:40 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:40.539770 | orchestrator | 2025-07-12 20:16:40 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:40.543252 | orchestrator | 2025-07-12 20:16:40 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:40.544181 | orchestrator | 2025-07-12 20:16:40 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:40.544242 | orchestrator | 2025-07-12 20:16:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:43.583908 | orchestrator | 2025-07-12 20:16:43 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED 2025-07-12 20:16:43.584072 | orchestrator | 2025-07-12 20:16:43 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:43.585301 | orchestrator | 2025-07-12 20:16:43 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:43.585745 | orchestrator | 2025-07-12 20:16:43 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:43.585784 | orchestrator | 2025-07-12 20:16:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:46.642896 | orchestrator | 2025-07-12 20:16:46 | INFO  | Task 
cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state STARTED
2025-07-12 20:16:46.643076 | orchestrator | 2025-07-12 20:16:46 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED
2025-07-12 20:16:46.643851 | orchestrator | 2025-07-12 20:16:46 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED
2025-07-12 20:16:46.644528 | orchestrator | 2025-07-12 20:16:46 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED
2025-07-12 20:16:46.644852 | orchestrator | 2025-07-12 20:16:46 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:49.691697 | orchestrator | 2025-07-12 20:16:49 | INFO  | Task cdbed458-b4bf-493a-8b4e-550b6ef897d0 is in state SUCCESS
2025-07-12 20:16:49.691877 | orchestrator |
2025-07-12 20:16:49.691899 | orchestrator |
2025-07-12 20:16:49.691912 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-07-12 20:16:49.691925 | orchestrator |
2025-07-12 20:16:49.692011 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-07-12 20:16:49.692023 | orchestrator | Saturday 12 July 2025 20:15:01 +0000 (0:00:00.223) 0:00:00.223 *********
2025-07-12 20:16:49.692034 | orchestrator | changed: [localhost]
2025-07-12 20:16:49.692046 | orchestrator |
2025-07-12 20:16:49.692058 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-07-12 20:16:49.692069 | orchestrator | Saturday 12 July 2025 20:15:02 +0000 (0:00:00.891) 0:00:01.115 *********
2025-07-12 20:16:49.692079 | orchestrator | changed: [localhost]
2025-07-12 20:16:49.692090 | orchestrator |
2025-07-12 20:16:49.692101 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-07-12 20:16:49.692112 | orchestrator | Saturday 12 July 2025 20:15:33 +0000 (0:00:31.275) 0:00:32.391 *********
2025-07-12 20:16:49.692123 | orchestrator | changed: [localhost]
2025-07-12 20:16:49.692134 | orchestrator |
2025-07-12 20:16:49.692161 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:16:49.692173 | orchestrator |
2025-07-12 20:16:49.692184 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:16:49.692195 | orchestrator | Saturday 12 July 2025 20:15:38 +0000 (0:00:04.494) 0:00:36.886 *********
2025-07-12 20:16:49.692206 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:16:49.692217 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:16:49.692228 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:16:49.692238 | orchestrator |
2025-07-12 20:16:49.692249 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:16:49.692368 | orchestrator | Saturday 12 July 2025 20:15:38 +0000 (0:00:00.371) 0:00:37.258 *********
2025-07-12 20:16:49.692383 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-07-12 20:16:49.692394 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-07-12 20:16:49.692405 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-07-12 20:16:49.692416 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-07-12 20:16:49.692427 | orchestrator |
2025-07-12 20:16:49.692438 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-07-12 20:16:49.692449 | orchestrator | skipping: no hosts matched
2025-07-12 20:16:49.692460 | orchestrator |
2025-07-12 20:16:49.692471 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:16:49.692482 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:16:49.692516 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:16:49.692531 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:16:49.692543 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:16:49.692556 | orchestrator |
2025-07-12 20:16:49.692568 | orchestrator |
2025-07-12 20:16:49.692580 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:16:49.692593 | orchestrator | Saturday 12 July 2025 20:15:39 +0000 (0:00:00.952) 0:00:38.210 *********
2025-07-12 20:16:49.692605 | orchestrator | ===============================================================================
2025-07-12 20:16:49.692617 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 31.28s
2025-07-12 20:16:49.692629 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.49s
2025-07-12 20:16:49.692642 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.95s
2025-07-12 20:16:49.692654 | orchestrator | Ensure the destination directory exists --------------------------------- 0.89s
2025-07-12 20:16:49.692665 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.37s
2025-07-12 20:16:49.692677 | orchestrator |
2025-07-12 20:16:49.693796 | orchestrator |
2025-07-12 20:16:49.693867 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:16:49.693894 | orchestrator |
2025-07-12 20:16:49.693906 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:16:49.693928 | orchestrator | Saturday 12 July 2025 20:12:24 +0000 (0:00:00.314) 0:00:00.314 *********
2025-07-12 20:16:49.693940 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:16:49.694004 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:16:49.694062 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:16:49.694105 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:16:49.694118 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:16:49.694129 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:16:49.694139 | orchestrator |
2025-07-12 20:16:49.694151 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:16:49.694162 | orchestrator | Saturday 12 July 2025 20:12:26 +0000 (0:00:01.579) 0:00:01.894 *********
2025-07-12 20:16:49.694172 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-07-12 20:16:49.694183 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-07-12 20:16:49.694194 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-07-12 20:16:49.694205 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-07-12 20:16:49.694215 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-07-12 20:16:49.694226 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-07-12 20:16:49.694237 | orchestrator |
2025-07-12 20:16:49.694247 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-07-12 20:16:49.694258 | orchestrator |
2025-07-12 20:16:49.694269 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 20:16:49.694279 | orchestrator | Saturday 12 July 2025 20:12:27 +0000 (0:00:01.138) 0:00:03.032 *********
2025-07-12 20:16:49.694291 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:16:49.694302 | orchestrator |
2025-07-12 20:16:49.694313 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-07-12 20:16:49.694324 | orchestrator | Saturday 12 July 2025 20:12:29 +0000 (0:00:01.646) 0:00:04.679 *********
2025-07-12 20:16:49.694335 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:16:49.694345 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:16:49.694356 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:16:49.694383 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:16:49.694396 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:16:49.694408 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:16:49.694421 | orchestrator |
2025-07-12 20:16:49.694433 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-07-12 20:16:49.694446 | orchestrator | Saturday 12 July 2025 20:12:30 +0000 (0:00:01.234) 0:00:05.914 *********
2025-07-12 20:16:49.694467 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:16:49.694480 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:16:49.694492 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:16:49.694505 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:16:49.694516 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:16:49.694528 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:16:49.694540 | orchestrator |
2025-07-12 20:16:49.694553 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-07-12 20:16:49.694565 | orchestrator | Saturday 12 July 2025 20:12:31 +0000 (0:00:00.989) 0:00:06.904 *********
2025-07-12 20:16:49.694577 | orchestrator | ok: [testbed-node-0] => {
2025-07-12 20:16:49.694590 | orchestrator |  "changed": false,
2025-07-12 20:16:49.694602 | orchestrator |  "msg": "All assertions passed"
2025-07-12 20:16:49.694614 | orchestrator | }
2025-07-12 20:16:49.694627 | orchestrator | ok: [testbed-node-1] => {
2025-07-12 20:16:49.694639 | orchestrator |  "changed": false,
2025-07-12 20:16:49.694651 | orchestrator |  "msg": "All assertions passed"
2025-07-12 20:16:49.694663 | orchestrator | }
2025-07-12 20:16:49.694676 | orchestrator | ok: [testbed-node-2] => {
2025-07-12 20:16:49.694688 | orchestrator |
"changed": false, 2025-07-12 20:16:49.694700 | orchestrator |  "msg": "All assertions passed" 2025-07-12 20:16:49.694712 | orchestrator | } 2025-07-12 20:16:49.694724 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 20:16:49.694736 | orchestrator |  "changed": false, 2025-07-12 20:16:49.694749 | orchestrator |  "msg": "All assertions passed" 2025-07-12 20:16:49.694760 | orchestrator | } 2025-07-12 20:16:49.694771 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 20:16:49.694782 | orchestrator |  "changed": false, 2025-07-12 20:16:49.694792 | orchestrator |  "msg": "All assertions passed" 2025-07-12 20:16:49.694803 | orchestrator | } 2025-07-12 20:16:49.694814 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 20:16:49.694824 | orchestrator |  "changed": false, 2025-07-12 20:16:49.694835 | orchestrator |  "msg": "All assertions passed" 2025-07-12 20:16:49.694846 | orchestrator | } 2025-07-12 20:16:49.694857 | orchestrator | 2025-07-12 20:16:49.694867 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-07-12 20:16:49.694879 | orchestrator | Saturday 12 July 2025 20:12:31 +0000 (0:00:00.694) 0:00:07.599 ********* 2025-07-12 20:16:49.694890 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.694900 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.694911 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.694922 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.694932 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.694943 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.694953 | orchestrator | 2025-07-12 20:16:49.694980 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-07-12 20:16:49.694992 | orchestrator | Saturday 12 July 2025 20:12:32 +0000 (0:00:00.608) 0:00:08.207 ********* 2025-07-12 20:16:49.695002 | orchestrator | changed: [testbed-node-0] => (item=neutron 
(network)) 2025-07-12 20:16:49.695013 | orchestrator | 2025-07-12 20:16:49.695024 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-07-12 20:16:49.695035 | orchestrator | Saturday 12 July 2025 20:12:35 +0000 (0:00:03.024) 0:00:11.232 ********* 2025-07-12 20:16:49.695045 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-07-12 20:16:49.695056 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-07-12 20:16:49.695067 | orchestrator | 2025-07-12 20:16:49.695091 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-07-12 20:16:49.695110 | orchestrator | Saturday 12 July 2025 20:12:41 +0000 (0:00:06.236) 0:00:17.469 ********* 2025-07-12 20:16:49.695121 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 20:16:49.695132 | orchestrator | 2025-07-12 20:16:49.695155 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-07-12 20:16:49.695166 | orchestrator | Saturday 12 July 2025 20:12:45 +0000 (0:00:03.322) 0:00:20.791 ********* 2025-07-12 20:16:49.695177 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 20:16:49.695188 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-07-12 20:16:49.695199 | orchestrator | 2025-07-12 20:16:49.695210 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-07-12 20:16:49.695220 | orchestrator | Saturday 12 July 2025 20:12:49 +0000 (0:00:03.915) 0:00:24.707 ********* 2025-07-12 20:16:49.695231 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 20:16:49.695242 | orchestrator | 2025-07-12 20:16:49.695252 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-07-12 20:16:49.695263 | 
orchestrator | Saturday 12 July 2025 20:12:52 +0000 (0:00:03.227) 0:00:27.934 ********* 2025-07-12 20:16:49.695274 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-07-12 20:16:49.695284 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-07-12 20:16:49.695295 | orchestrator | 2025-07-12 20:16:49.695306 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-12 20:16:49.695317 | orchestrator | Saturday 12 July 2025 20:13:00 +0000 (0:00:08.043) 0:00:35.978 ********* 2025-07-12 20:16:49.695327 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.695338 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.695349 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.695360 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.695370 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.695381 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.695392 | orchestrator | 2025-07-12 20:16:49.695402 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-07-12 20:16:49.695413 | orchestrator | Saturday 12 July 2025 20:13:01 +0000 (0:00:00.822) 0:00:36.800 ********* 2025-07-12 20:16:49.695424 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.695435 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.695445 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.695456 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.695467 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.695477 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.695488 | orchestrator | 2025-07-12 20:16:49.695499 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-07-12 20:16:49.695510 | orchestrator | Saturday 12 July 2025 20:13:03 +0000 (0:00:02.848) 
0:00:39.648 ********* 2025-07-12 20:16:49.695532 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:16:49.695551 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:16:49.695568 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:16:49.695586 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:16:49.695604 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:16:49.695623 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:16:49.695640 | orchestrator | 2025-07-12 20:16:49.695661 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-07-12 20:16:49.695672 | orchestrator | Saturday 12 July 2025 20:13:05 +0000 (0:00:01.219) 0:00:40.867 ********* 2025-07-12 20:16:49.695684 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.695694 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.695705 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.695716 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.695727 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.695737 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.695748 | orchestrator | 2025-07-12 20:16:49.695768 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-07-12 20:16:49.695779 | orchestrator | Saturday 12 July 2025 20:13:07 +0000 (0:00:02.419) 0:00:43.287 ********* 2025-07-12 20:16:49.695794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.695819 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.695832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.695850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.695863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.695881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.695893 | orchestrator | 2025-07-12 20:16:49.695905 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-07-12 20:16:49.695916 | orchestrator | Saturday 12 July 2025 20:13:10 +0000 (0:00:02.887) 0:00:46.174 ********* 2025-07-12 20:16:49.695927 | orchestrator | [WARNING]: Skipped 2025-07-12 20:16:49.695938 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-07-12 20:16:49.695950 | orchestrator | due to this access issue: 2025-07-12 20:16:49.695981 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-07-12 20:16:49.695995 | orchestrator | a directory 2025-07-12 20:16:49.696006 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:16:49.696017 | orchestrator | 2025-07-12 20:16:49.696028 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-12 20:16:49.696068 | orchestrator | Saturday 12 July 2025 20:13:11 +0000 (0:00:00.884) 0:00:47.058 ********* 2025-07-12 20:16:49.696091 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:16:49.696110 | orchestrator | 2025-07-12 20:16:49.696129 | orchestrator | TASK [service-cert-copy : neutron | 
Copying over extra CA certificates] ******** 2025-07-12 20:16:49.696148 | orchestrator | Saturday 12 July 2025 20:13:12 +0000 (0:00:01.223) 0:00:48.282 ********* 2025-07-12 20:16:49.696169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.696199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2025-07-12 20:16:49.696228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.696241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.696263 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.696275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.696287 | orchestrator | 2025-07-12 20:16:49.696298 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-07-12 20:16:49.696309 | orchestrator | Saturday 12 July 2025 20:13:15 +0000 (0:00:03.195) 0:00:51.478 ********* 2025-07-12 20:16:49.696326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:16:49.696344 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.696356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:16:49.696368 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.696379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.696391 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.696409 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.696421 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.696432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:16:49.696450 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.696461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.696473 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.696492 | orchestrator | 2025-07-12 20:16:49.696509 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-07-12 20:16:49.696520 | orchestrator | Saturday 12 July 2025 20:13:18 +0000 (0:00:02.586) 0:00:54.065 ********* 2025-07-12 20:16:49.696631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:16:49.696655 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.696676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:16:49.696688 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.696699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:16:49.696718 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.696734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.696746 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.696757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.696768 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.696779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.696791 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.696801 | orchestrator | 2025-07-12 20:16:49.696812 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-07-12 20:16:49.696823 | orchestrator | Saturday 12 July 2025 20:13:21 +0000 (0:00:02.915) 0:00:56.980 ********* 2025-07-12 20:16:49.696834 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.696845 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.696856 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.696866 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.696877 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.696888 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.696899 | orchestrator | 2025-07-12 20:16:49.696909 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-07-12 20:16:49.696926 | orchestrator | Saturday 12 July 2025 20:13:23 +0000 (0:00:02.349) 0:00:59.329 ********* 2025-07-12 
20:16:49.696937 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.696948 | orchestrator | 2025-07-12 20:16:49.696959 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-07-12 20:16:49.697008 | orchestrator | Saturday 12 July 2025 20:13:23 +0000 (0:00:00.131) 0:00:59.462 ********* 2025-07-12 20:16:49.697019 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.697037 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.697047 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.697058 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.697069 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.697079 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.697090 | orchestrator | 2025-07-12 20:16:49.697101 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-07-12 20:16:49.697112 | orchestrator | Saturday 12 July 2025 20:13:24 +0000 (0:00:00.881) 0:01:00.343 ********* 2025-07-12 20:16:49.697124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 
20:16:49.697136 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.697162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:16:49.697182 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.697200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.697219 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.697237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.697268 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.697757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:16:49.697778 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.697790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.697801 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.697813 | orchestrator | 2025-07-12 20:16:49.697824 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-07-12 20:16:49.697835 | orchestrator | Saturday 12 July 2025 20:13:28 +0000 (0:00:03.718) 0:01:04.061 ********* 2025-07-12 20:16:49.697854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.697866 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.697886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.697908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.697920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.697936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.697947 | orchestrator | 2025-07-12 20:16:49.697959 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-07-12 20:16:49.698005 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:04.484) 0:01:08.545 ********* 2025-07-12 20:16:49.698072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.698102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.698115 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.698132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.698144 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.698155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:16:49.698173 | orchestrator | 2025-07-12 20:16:49.698184 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-07-12 20:16:49.698195 | orchestrator | Saturday 12 July 2025 20:13:42 +0000 (0:00:09.415) 0:01:17.961 ********* 2025-07-12 20:16:49.698212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.698224 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.698235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.698246 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.698262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.698274 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.698285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.698297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.698321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.698334 | orchestrator | 2025-07-12 20:16:49.698345 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-07-12 20:16:49.698356 | orchestrator | Saturday 12 July 2025 20:13:46 +0000 (0:00:03.936) 0:01:21.897 ********* 2025-07-12 20:16:49.698367 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:49.698378 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.698388 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.698399 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:16:49.698410 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.698420 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:16:49.698431 | orchestrator | 2025-07-12 20:16:49.698442 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] 
************************************* 2025-07-12 20:16:49.698453 | orchestrator | Saturday 12 July 2025 20:13:49 +0000 (0:00:03.658) 0:01:25.556 ********* 2025-07-12 20:16:49.698464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.698484 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.698496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.698514 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.698525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:16:49.698536 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.698553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.698565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.698582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:16:49.698593 | orchestrator | 2025-07-12 20:16:49.698604 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-07-12 20:16:49.698615 | orchestrator | Saturday 12 July 2025 20:13:54 +0000 (0:00:04.805) 0:01:30.361 ********* 2025-07-12 20:16:49.698633 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.698644 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.698654 | orchestrator | skipping: [testbed-node-3] 
2025-07-12 20:16:49.698665 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.698675 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.698686 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.698697 | orchestrator | 2025-07-12 20:16:49.698708 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-07-12 20:16:49.698718 | orchestrator | Saturday 12 July 2025 20:13:58 +0000 (0:00:03.560) 0:01:33.922 ********* 2025-07-12 20:16:49.698729 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.698740 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.698750 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.698761 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.698771 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.698782 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.698793 | orchestrator | 2025-07-12 20:16:49.698803 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-07-12 20:16:49.698814 | orchestrator | Saturday 12 July 2025 20:14:01 +0000 (0:00:03.338) 0:01:37.261 ********* 2025-07-12 20:16:49.698825 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:16:49.698835 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:49.698846 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:49.698857 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:16:49.698867 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:49.698878 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:16:49.698888 | orchestrator | 2025-07-12 20:16:49.698899 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-07-12 20:16:49.698910 | orchestrator | Saturday 12 July 2025 20:14:04 +0000 (0:00:02.526) 0:01:39.788 ********* 2025-07-12 20:16:49.698920 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 20:16:49.698931 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.698942 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.698953 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.698985 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.698998 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.699009 | orchestrator |
2025-07-12 20:16:49.699020 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-07-12 20:16:49.699031 | orchestrator | Saturday 12 July 2025 20:14:06 +0000 (0:00:02.873) 0:01:42.662 *********
2025-07-12 20:16:49.699041 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.699052 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.699063 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.699073 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.699084 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.699095 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.699106 | orchestrator |
2025-07-12 20:16:49.699122 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-07-12 20:16:49.699134 | orchestrator | Saturday 12 July 2025 20:14:09 +0000 (0:00:02.080) 0:01:44.742 *********
2025-07-12 20:16:49.699145 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.699156 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.699166 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.699177 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.699188 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.699198 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.699209 | orchestrator |
2025-07-12 20:16:49.699220 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-07-12 20:16:49.699230 | orchestrator | Saturday 12 July 2025 20:14:11 +0000 (0:00:02.090) 0:01:46.833 *********
2025-07-12 20:16:49.699241 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:16:49.699259 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.699270 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:16:49.699281 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.699292 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:16:49.699303 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.699313 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:16:49.699326 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.699344 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:16:49.699363 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.699382 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:16:49.699401 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.699414 | orchestrator |
2025-07-12 20:16:49.699425 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-07-12 20:16:49.699436 | orchestrator | Saturday 12 July 2025 20:14:13 +0000 (0:00:02.061) 0:01:48.894 *********
2025-07-12 20:16:49.699453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.699465 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.699477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.699488 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.699506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.699524 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.699536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.699548 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.699559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.699570 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.699586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.699598 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.699609 | orchestrator |
2025-07-12 20:16:49.699620 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-07-12 20:16:49.699631 | orchestrator | Saturday 12 July 2025 20:14:16 +0000 (0:00:02.854) 0:01:51.749 *********
2025-07-12 20:16:49.699643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.699654 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.699671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.699690 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.699701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.699712 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.699728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.699740 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.699751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.699763 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.699774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.699791 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.699802 | orchestrator |
2025-07-12 20:16:49.699813 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-07-12 20:16:49.699824 | orchestrator | Saturday 12 July 2025 20:14:18 +0000 (0:00:02.890) 0:01:54.640 *********
2025-07-12 20:16:49.699835 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.699846 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.699857 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.699868 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.699878 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.699896 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.699907 | orchestrator |
2025-07-12 20:16:49.699918 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-07-12 20:16:49.699929 | orchestrator | Saturday 12 July 2025 20:14:21 +0000 (0:00:02.372) 0:01:57.013 *********
2025-07-12 20:16:49.699940 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.699951 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.699981 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700002 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:16:49.700022 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:16:49.700041 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:16:49.700057 | orchestrator |
2025-07-12 20:16:49.700068 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-07-12 20:16:49.700079 | orchestrator | Saturday 12 July 2025 20:14:24 +0000 (0:00:03.234) 0:02:00.247 *********
2025-07-12 20:16:49.700090 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700100 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.700111 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.700122 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.700132 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.700143 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.700153 | orchestrator |
2025-07-12 20:16:49.700164 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-07-12 20:16:49.700175 | orchestrator | Saturday 12 July 2025 20:14:28 +0000 (0:00:03.897) 0:02:04.145 *********
2025-07-12 20:16:49.700186 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.700196 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.700207 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700218 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.700228 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.700239 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.700249 | orchestrator |
2025-07-12 20:16:49.700260 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-07-12 20:16:49.700271 | orchestrator | Saturday 12 July 2025 20:14:31 +0000 (0:00:02.870) 0:02:07.360 *********
2025-07-12 20:16:49.700281 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700292 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.700303 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.700313 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.700324 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.700335 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.700345 | orchestrator |
2025-07-12 20:16:49.700356 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-07-12 20:16:49.700366 | orchestrator | Saturday 12 July 2025 20:14:34 +0000 (0:00:02.870) 0:02:10.231 *********
2025-07-12 20:16:49.700377 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.700388 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700398 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.700415 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.700425 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.700436 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.700454 | orchestrator |
2025-07-12 20:16:49.700465 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-07-12 20:16:49.700476 | orchestrator | Saturday 12 July 2025 20:14:36 +0000 (0:00:02.095) 0:02:12.327 *********
2025-07-12 20:16:49.700486 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700497 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.700508 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.700518 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.700529 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.700539 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.700550 | orchestrator |
2025-07-12 20:16:49.700561 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-07-12 20:16:49.700571 | orchestrator | Saturday 12 July 2025 20:14:40 +0000 (0:00:04.054) 0:02:16.381 *********
2025-07-12 20:16:49.700582 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700593 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.700603 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.700614 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.700625 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.700635 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.700646 | orchestrator |
2025-07-12 20:16:49.700657 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-07-12 20:16:49.700667 | orchestrator | Saturday 12 July 2025 20:14:43 +0000 (0:00:02.928) 0:02:19.310 *********
2025-07-12 20:16:49.700678 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700689 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.700699 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.700710 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.700720 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.700731 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.700742 | orchestrator |
2025-07-12 20:16:49.700752 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-07-12 20:16:49.700763 | orchestrator | Saturday 12 July 2025 20:14:48 +0000 (0:00:04.366) 0:02:23.676 *********
2025-07-12 20:16:49.700774 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700785 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.700795 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.700806 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.700816 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.700827 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.700837 | orchestrator |
2025-07-12 20:16:49.700848 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-07-12 20:16:49.700859 | orchestrator | Saturday 12 July 2025 20:14:50 +0000 (0:00:02.088) 0:02:25.765 *********
2025-07-12 20:16:49.700870 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:16:49.700880 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.700891 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:16:49.700902 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.700919 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:16:49.700930 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.700941 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:16:49.700952 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.701019 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:16:49.701034 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.701046 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:16:49.701058 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.701076 | orchestrator |
2025-07-12 20:16:49.701087 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-07-12 20:16:49.701098 | orchestrator | Saturday 12 July 2025 20:14:54 +0000 (0:00:04.411) 0:02:30.176 *********
2025-07-12 20:16:49.701141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.701154 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.701172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.701185 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.701196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.701208 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.701227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.701240 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.701262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.701274 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.701285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.701297 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.701307 | orchestrator |
2025-07-12 20:16:49.701319 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-07-12 20:16:49.701329 | orchestrator | Saturday 12 July 2025 20:14:58 +0000 (0:00:03.656) 0:02:33.832 *********
2025-07-12 20:16:49.701346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.701359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.701379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:16:49.701399 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.701416 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.701428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:16:49.701440 | orchestrator |
2025-07-12 20:16:49.701452 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 20:16:49.701464 | orchestrator | Saturday 12 July 2025 20:15:02 +0000 (0:00:04.092) 0:02:37.924 *********
2025-07-12 20:16:49.701475 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:49.701486 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:49.701496 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:49.701506 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:49.701516 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:49.701526 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:49.701536 | orchestrator |
2025-07-12 20:16:49.701546 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-07-12 20:16:49.701556 | orchestrator | Saturday 12 July 2025 20:15:02 +0000 (0:00:00.706) 0:02:38.631 *********
2025-07-12 20:16:49.701566 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:49.701576 | orchestrator |
2025-07-12 20:16:49.701586 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-07-12 20:16:49.701596 | orchestrator | Saturday 12 July 2025 20:15:05 +0000 (0:00:02.262) 0:02:40.893 *********
2025-07-12 20:16:49.701613 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:49.701623 | orchestrator |
2025-07-12 20:16:49.701633 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-07-12 20:16:49.701643 | orchestrator | Saturday 12 July 2025 20:15:07 +0000 (0:00:02.327) 0:02:43.220 *********
2025-07-12 20:16:49.701653 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:49.701663 | orchestrator |
2025-07-12 20:16:49.701673 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-12 20:16:49.701683 | orchestrator | Saturday 12 July 2025 20:15:56 +0000 (0:00:49.227) 0:03:32.448 *********
2025-07-12 20:16:49.701693 | orchestrator |
2025-07-12 20:16:49.701703 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-12 20:16:49.701714 | orchestrator | Saturday 12 July 2025 20:15:56 +0000 (0:00:00.101) 0:03:32.549 *********
2025-07-12 20:16:49.701723 | orchestrator |
2025-07-12 20:16:49.701733 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-12 20:16:49.701750 | orchestrator | Saturday 12 July 2025 20:15:57 +0000 (0:00:00.552) 0:03:33.101 *********
2025-07-12 20:16:49.701760 | orchestrator |
2025-07-12 20:16:49.701771 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-12 20:16:49.701781 | orchestrator | Saturday 12 July 2025 20:15:57 +0000 (0:00:00.115) 0:03:33.217 *********
2025-07-12 20:16:49.701791 | orchestrator |
2025-07-12 20:16:49.701801 | orchestrator | TASK [neutron
: Flush Handlers] ************************************************ 2025-07-12 20:16:49.701811 | orchestrator | Saturday 12 July 2025 20:15:57 +0000 (0:00:00.076) 0:03:33.293 ********* 2025-07-12 20:16:49.701822 | orchestrator | 2025-07-12 20:16:49.701832 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 20:16:49.701842 | orchestrator | Saturday 12 July 2025 20:15:57 +0000 (0:00:00.132) 0:03:33.426 ********* 2025-07-12 20:16:49.701852 | orchestrator | 2025-07-12 20:16:49.701862 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-07-12 20:16:49.701872 | orchestrator | Saturday 12 July 2025 20:15:57 +0000 (0:00:00.078) 0:03:33.505 ********* 2025-07-12 20:16:49.701882 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:49.701892 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:16:49.701902 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:16:49.701912 | orchestrator | 2025-07-12 20:16:49.701923 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-07-12 20:16:49.701933 | orchestrator | Saturday 12 July 2025 20:16:25 +0000 (0:00:27.820) 0:04:01.325 ********* 2025-07-12 20:16:49.701943 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:16:49.701953 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:16:49.701982 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:16:49.702000 | orchestrator | 2025-07-12 20:16:49.702054 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:16:49.702067 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 20:16:49.702078 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 20:16:49.702088 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 
failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 20:16:49.702098 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 20:16:49.702113 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 20:16:49.702123 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 20:16:49.702143 | orchestrator | 2025-07-12 20:16:49.702153 | orchestrator | 2025-07-12 20:16:49.702163 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:16:49.702173 | orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:21.410) 0:04:22.736 ********* 2025-07-12 20:16:49.702182 | orchestrator | =============================================================================== 2025-07-12 20:16:49.702192 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 49.23s 2025-07-12 20:16:49.702201 | orchestrator | neutron : Restart neutron-server container ----------------------------- 27.82s 2025-07-12 20:16:49.702211 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 21.41s 2025-07-12 20:16:49.702220 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 9.42s 2025-07-12 20:16:49.702230 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.04s 2025-07-12 20:16:49.702239 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.24s 2025-07-12 20:16:49.702249 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.81s 2025-07-12 20:16:49.702259 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.48s 2025-07-12 20:16:49.702268 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg 
---------------------------- 4.41s 2025-07-12 20:16:49.702278 | orchestrator | neutron : Copy neutron-l3-agent-wrapper script -------------------------- 4.37s 2025-07-12 20:16:49.702287 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.09s 2025-07-12 20:16:49.702297 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 4.05s 2025-07-12 20:16:49.702306 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.94s 2025-07-12 20:16:49.702316 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.92s 2025-07-12 20:16:49.702326 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 3.90s 2025-07-12 20:16:49.702335 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.72s 2025-07-12 20:16:49.702345 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.66s 2025-07-12 20:16:49.702354 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.66s 2025-07-12 20:16:49.702364 | orchestrator | neutron : Copying over linuxbridge_agent.ini ---------------------------- 3.56s 2025-07-12 20:16:49.702374 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 3.34s 2025-07-12 20:16:49.702390 | orchestrator | 2025-07-12 20:16:49 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:49.702400 | orchestrator | 2025-07-12 20:16:49 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:49.702411 | orchestrator | 2025-07-12 20:16:49 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:49.702420 | orchestrator | 2025-07-12 20:16:49 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:16:49.702430 | orchestrator | 2025-07-12 
20:16:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:52.758797 | orchestrator | 2025-07-12 20:16:52 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state STARTED 2025-07-12 20:16:52.760658 | orchestrator | 2025-07-12 20:16:52 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:52.762375 | orchestrator | 2025-07-12 20:16:52 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:52.764612 | orchestrator | 2025-07-12 20:16:52 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:16:52.764679 | orchestrator | 2025-07-12 20:16:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:55.818068 | orchestrator | 2025-07-12 20:16:55 | INFO  | Task c1eb81ec-90d1-4195-9715-7f3a554ba224 is in state SUCCESS 2025-07-12 20:16:55.818197 | orchestrator | 2025-07-12 20:16:55 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:55.819792 | orchestrator | 2025-07-12 20:16:55.819838 | orchestrator | 2025-07-12 20:16:55.819850 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:16:55.819863 | orchestrator | 2025-07-12 20:16:55.819874 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:16:55.819886 | orchestrator | Saturday 12 July 2025 20:15:44 +0000 (0:00:00.335) 0:00:00.335 ********* 2025-07-12 20:16:55.819897 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:16:55.819909 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:16:55.819920 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:16:55.819931 | orchestrator | 2025-07-12 20:16:55.819942 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:16:55.819975 | orchestrator | Saturday 12 July 2025 20:15:45 +0000 (0:00:00.515) 0:00:00.851 ********* 2025-07-12 20:16:55.819987 | 
orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-07-12 20:16:55.820013 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-07-12 20:16:55.820025 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-07-12 20:16:55.820037 | orchestrator | 2025-07-12 20:16:55.820048 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-07-12 20:16:55.820059 | orchestrator | 2025-07-12 20:16:55.820070 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-12 20:16:55.820081 | orchestrator | Saturday 12 July 2025 20:15:45 +0000 (0:00:00.443) 0:00:01.295 ********* 2025-07-12 20:16:55.820092 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:16:55.820104 | orchestrator | 2025-07-12 20:16:55.820115 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-07-12 20:16:55.820126 | orchestrator | Saturday 12 July 2025 20:15:46 +0000 (0:00:00.549) 0:00:01.844 ********* 2025-07-12 20:16:55.820137 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-07-12 20:16:55.820148 | orchestrator | 2025-07-12 20:16:55.820159 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-07-12 20:16:55.820170 | orchestrator | Saturday 12 July 2025 20:15:50 +0000 (0:00:03.945) 0:00:05.789 ********* 2025-07-12 20:16:55.820181 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-07-12 20:16:55.820193 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-07-12 20:16:55.820204 | orchestrator | 2025-07-12 20:16:55.820215 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-07-12 
20:16:55.820225 | orchestrator | Saturday 12 July 2025 20:15:57 +0000 (0:00:06.677) 0:00:12.467 ********* 2025-07-12 20:16:55.820236 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 20:16:55.820247 | orchestrator | 2025-07-12 20:16:55.820258 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-07-12 20:16:55.820269 | orchestrator | Saturday 12 July 2025 20:16:00 +0000 (0:00:03.367) 0:00:15.835 ********* 2025-07-12 20:16:55.820280 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 20:16:55.820291 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-07-12 20:16:55.820302 | orchestrator | 2025-07-12 20:16:55.820312 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-07-12 20:16:55.820323 | orchestrator | Saturday 12 July 2025 20:16:04 +0000 (0:00:04.074) 0:00:19.909 ********* 2025-07-12 20:16:55.820334 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 20:16:55.820345 | orchestrator | 2025-07-12 20:16:55.820355 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-07-12 20:16:55.820367 | orchestrator | Saturday 12 July 2025 20:16:08 +0000 (0:00:03.614) 0:00:23.524 ********* 2025-07-12 20:16:55.820399 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-07-12 20:16:55.820412 | orchestrator | 2025-07-12 20:16:55.820424 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-12 20:16:55.820436 | orchestrator | Saturday 12 July 2025 20:16:12 +0000 (0:00:04.127) 0:00:27.651 ********* 2025-07-12 20:16:55.820448 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:55.820460 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:55.820472 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:55.820484 | orchestrator | 2025-07-12 
20:16:55.820496 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-07-12 20:16:55.820509 | orchestrator | Saturday 12 July 2025 20:16:12 +0000 (0:00:00.225) 0:00:27.876 ********* 2025-07-12 20:16:55.820524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.820562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.820576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.820590 | orchestrator | 2025-07-12 20:16:55.820602 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-07-12 20:16:55.820614 | orchestrator | Saturday 12 July 2025 20:16:13 +0000 (0:00:00.796) 0:00:28.673 ********* 2025-07-12 20:16:55.820625 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:55.820636 | orchestrator | 2025-07-12 20:16:55.820647 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-07-12 20:16:55.820658 | orchestrator | Saturday 12 July 2025 20:16:13 +0000 (0:00:00.136) 0:00:28.809 ********* 2025-07-12 20:16:55.820676 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:55.820687 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:55.820698 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:55.820709 | orchestrator | 2025-07-12 20:16:55.820720 | orchestrator | TASK [placement : 
include_tasks] *********************************************** 2025-07-12 20:16:55.820731 | orchestrator | Saturday 12 July 2025 20:16:13 +0000 (0:00:00.440) 0:00:29.250 ********* 2025-07-12 20:16:55.820741 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:16:55.820752 | orchestrator | 2025-07-12 20:16:55.820763 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-07-12 20:16:55.820774 | orchestrator | Saturday 12 July 2025 20:16:14 +0000 (0:00:00.686) 0:00:29.937 ********* 2025-07-12 20:16:55.820786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.820806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.820824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.820841 | orchestrator | 2025-07-12 20:16:55.820859 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-07-12 20:16:55.820871 | orchestrator | Saturday 12 July 2025 20:16:16 +0000 (0:00:01.844) 0:00:31.782 ********* 2025-07-12 20:16:55.820883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:16:55.820901 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:55.820912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:16:55.820924 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:55.820941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:16:55.820973 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:55.820985 | orchestrator | 2025-07-12 20:16:55.820996 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-07-12 20:16:55.821006 | orchestrator | Saturday 12 July 2025 20:16:16 +0000 (0:00:00.616) 0:00:32.399 ********* 2025-07-12 20:16:55.821023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:16:55.821034 
| orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:55.821052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:16:55.821064 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:55.821075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 
20:16:55.821086 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:55.821097 | orchestrator | 2025-07-12 20:16:55.821108 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-07-12 20:16:55.821118 | orchestrator | Saturday 12 July 2025 20:16:17 +0000 (0:00:00.630) 0:00:33.029 ********* 2025-07-12 20:16:55.821135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.821152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.821170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.821181 | orchestrator | 2025-07-12 20:16:55.821192 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-07-12 20:16:55.821203 | orchestrator | Saturday 12 July 2025 20:16:18 +0000 (0:00:01.348) 0:00:34.378 ********* 2025-07-12 20:16:55.821214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.821225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.821244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.821255 | orchestrator | 2025-07-12 20:16:55.821271 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-07-12 20:16:55.821288 | orchestrator | Saturday 12 July 2025 20:16:21 +0000 (0:00:02.384) 0:00:36.762 ********* 2025-07-12 20:16:55.821299 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 20:16:55.821310 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 20:16:55.821320 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 20:16:55.821331 | orchestrator | 2025-07-12 20:16:55.821342 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-07-12 20:16:55.821353 | orchestrator | Saturday 12 July 2025 20:16:22 +0000 (0:00:01.513) 0:00:38.276 ********* 2025-07-12 20:16:55.821363 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:55.821374 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:16:55.821385 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:16:55.821395 | orchestrator | 2025-07-12 20:16:55.821406 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-07-12 20:16:55.821417 | orchestrator | Saturday 12 July 2025 20:16:24 +0000 (0:00:02.063) 0:00:40.339 ********* 2025-07-12 20:16:55.821428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:16:55.821439 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:16:55.821450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:16:55.821461 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:16:55.821478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:16:55.821496 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:16:55.821507 | orchestrator | 2025-07-12 20:16:55.821518 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-07-12 20:16:55.821529 | orchestrator | Saturday 12 July 2025 20:16:25 +0000 (0:00:01.057) 0:00:41.397 ********* 2025-07-12 20:16:55.821550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 
20:16:55.821562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.821574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:16:55.821585 | orchestrator | 2025-07-12 20:16:55.821596 | orchestrator | TASK [placement : Creating placement databases] 
******************************** 2025-07-12 20:16:55.821606 | orchestrator | Saturday 12 July 2025 20:16:28 +0000 (0:00:02.392) 0:00:43.789 ********* 2025-07-12 20:16:55.821617 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:55.821628 | orchestrator | 2025-07-12 20:16:55.821638 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-07-12 20:16:55.821649 | orchestrator | Saturday 12 July 2025 20:16:30 +0000 (0:00:02.511) 0:00:46.301 ********* 2025-07-12 20:16:55.821660 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:55.821671 | orchestrator | 2025-07-12 20:16:55.821681 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-07-12 20:16:55.821692 | orchestrator | Saturday 12 July 2025 20:16:33 +0000 (0:00:02.405) 0:00:48.706 ********* 2025-07-12 20:16:55.821703 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:55.821719 | orchestrator | 2025-07-12 20:16:55.821730 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 20:16:55.821741 | orchestrator | Saturday 12 July 2025 20:16:45 +0000 (0:00:12.602) 0:01:01.308 ********* 2025-07-12 20:16:55.821751 | orchestrator | 2025-07-12 20:16:55.821762 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 20:16:55.821773 | orchestrator | Saturday 12 July 2025 20:16:45 +0000 (0:00:00.065) 0:01:01.374 ********* 2025-07-12 20:16:55.821783 | orchestrator | 2025-07-12 20:16:55.821799 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 20:16:55.821811 | orchestrator | Saturday 12 July 2025 20:16:46 +0000 (0:00:00.080) 0:01:01.455 ********* 2025-07-12 20:16:55.821822 | orchestrator | 2025-07-12 20:16:55.821833 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-07-12 20:16:55.821843 | 
orchestrator | Saturday 12 July 2025 20:16:46 +0000 (0:00:00.077) 0:01:01.533 ********* 2025-07-12 20:16:55.821854 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:55.821865 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:16:55.821875 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:16:55.821886 | orchestrator | 2025-07-12 20:16:55.821897 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:16:55.821908 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:16:55.821924 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:16:55.821935 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:16:55.821946 | orchestrator | 2025-07-12 20:16:55.821985 | orchestrator | 2025-07-12 20:16:55.821997 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:16:55.822008 | orchestrator | Saturday 12 July 2025 20:16:52 +0000 (0:00:06.404) 0:01:07.938 ********* 2025-07-12 20:16:55.822066 | orchestrator | =============================================================================== 2025-07-12 20:16:55.822078 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.60s 2025-07-12 20:16:55.822089 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.68s 2025-07-12 20:16:55.822099 | orchestrator | placement : Restart placement-api container ----------------------------- 6.41s 2025-07-12 20:16:55.822110 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.13s 2025-07-12 20:16:55.822121 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.07s 2025-07-12 20:16:55.822131 | orchestrator | 
service-ks-register : placement | Creating services --------------------- 3.95s 2025-07-12 20:16:55.822142 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.61s 2025-07-12 20:16:55.822153 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.37s 2025-07-12 20:16:55.822164 | orchestrator | placement : Creating placement databases -------------------------------- 2.51s 2025-07-12 20:16:55.822175 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.41s 2025-07-12 20:16:55.822186 | orchestrator | placement : Check placement containers ---------------------------------- 2.39s 2025-07-12 20:16:55.822197 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.38s 2025-07-12 20:16:55.822207 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.06s 2025-07-12 20:16:55.822218 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.84s 2025-07-12 20:16:55.822229 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.51s 2025-07-12 20:16:55.822240 | orchestrator | placement : Copying over config.json files for services ----------------- 1.35s 2025-07-12 20:16:55.822250 | orchestrator | placement : Copying over existing policy file --------------------------- 1.06s 2025-07-12 20:16:55.822269 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.80s 2025-07-12 20:16:55.822280 | orchestrator | placement : include_tasks ----------------------------------------------- 0.69s 2025-07-12 20:16:55.822290 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.63s 2025-07-12 20:16:55.822301 | orchestrator | 2025-07-12 20:16:55 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:16:55.822524 | orchestrator | 
2025-07-12 20:16:55 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:55.824131 | orchestrator | 2025-07-12 20:16:55 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:16:55.824172 | orchestrator | 2025-07-12 20:16:55 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:58.876329 | orchestrator | 2025-07-12 20:16:58 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:16:58.878931 | orchestrator | 2025-07-12 20:16:58 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:16:58.879684 | orchestrator | 2025-07-12 20:16:58 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state STARTED 2025-07-12 20:16:58.882206 | orchestrator | 2025-07-12 20:16:58 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:16:58.882299 | orchestrator | 2025-07-12 20:16:58 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:01.912463 | orchestrator | 2025-07-12 20:17:01 | INFO  | Task f88eea8e-9422-41f7-a28f-4449a85c3585 is in state STARTED 2025-07-12 20:17:01.912869 | orchestrator | 2025-07-12 20:17:01 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:17:01.913646 | orchestrator | 2025-07-12 20:17:01 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:17:01.915462 | orchestrator | 2025-07-12 20:17:01 | INFO  | Task 9d4c93b1-54af-48fb-96c4-3f54843a1ee3 is in state SUCCESS 2025-07-12 20:17:01.916652 | orchestrator | 2025-07-12 20:17:01.916677 | orchestrator | 2025-07-12 20:17:01.916685 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:17:01.916692 | orchestrator | 2025-07-12 20:17:01.916699 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:17:01.916707 | orchestrator | Saturday 12 July 2025 20:13:58 
+0000 (0:00:00.252) 0:00:00.252 ********* 2025-07-12 20:17:01.916714 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:17:01.916723 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:17:01.916730 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:17:01.916737 | orchestrator | 2025-07-12 20:17:01.916753 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:17:01.916760 | orchestrator | Saturday 12 July 2025 20:13:58 +0000 (0:00:00.403) 0:00:00.656 ********* 2025-07-12 20:17:01.916767 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-07-12 20:17:01.916773 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-07-12 20:17:01.916779 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-07-12 20:17:01.916785 | orchestrator | 2025-07-12 20:17:01.916792 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-07-12 20:17:01.916798 | orchestrator | 2025-07-12 20:17:01.916805 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 20:17:01.916812 | orchestrator | Saturday 12 July 2025 20:13:59 +0000 (0:00:00.880) 0:00:01.536 ********* 2025-07-12 20:17:01.916839 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:17:01.916846 | orchestrator | 2025-07-12 20:17:01.916852 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-07-12 20:17:01.916871 | orchestrator | Saturday 12 July 2025 20:14:00 +0000 (0:00:01.292) 0:00:02.829 ********* 2025-07-12 20:17:01.916878 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-07-12 20:17:01.916885 | orchestrator | 2025-07-12 20:17:01.916891 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-07-12 20:17:01.916897 | 
orchestrator | Saturday 12 July 2025 20:14:04 +0000 (0:00:03.426) 0:00:06.255 ********* 2025-07-12 20:17:01.916904 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-07-12 20:17:01.916910 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-07-12 20:17:01.916916 | orchestrator | 2025-07-12 20:17:01.916922 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-07-12 20:17:01.916929 | orchestrator | Saturday 12 July 2025 20:14:10 +0000 (0:00:06.333) 0:00:12.589 ********* 2025-07-12 20:17:01.916935 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 20:17:01.916956 | orchestrator | 2025-07-12 20:17:01.916963 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-07-12 20:17:01.916970 | orchestrator | Saturday 12 July 2025 20:14:13 +0000 (0:00:03.266) 0:00:15.855 ********* 2025-07-12 20:17:01.916976 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 20:17:01.916983 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-07-12 20:17:01.916989 | orchestrator | 2025-07-12 20:17:01.916995 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-07-12 20:17:01.917002 | orchestrator | Saturday 12 July 2025 20:14:18 +0000 (0:00:04.219) 0:00:20.074 ********* 2025-07-12 20:17:01.917008 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 20:17:01.917014 | orchestrator | 2025-07-12 20:17:01.917021 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-07-12 20:17:01.917027 | orchestrator | Saturday 12 July 2025 20:14:22 +0000 (0:00:03.861) 0:00:23.935 ********* 2025-07-12 20:17:01.917034 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 
2025-07-12 20:17:01.917039 | orchestrator | 2025-07-12 20:17:01.917043 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-07-12 20:17:01.917047 | orchestrator | Saturday 12 July 2025 20:14:26 +0000 (0:00:04.461) 0:00:28.396 ********* 2025-07-12 20:17:01.917053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.917067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.917080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.917084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917160 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917164 | orchestrator | 2025-07-12 20:17:01.917168 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-07-12 20:17:01.917172 | orchestrator | Saturday 12 July 2025 20:14:30 +0000 (0:00:04.206) 0:00:32.603 ********* 2025-07-12 20:17:01.917176 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:01.917180 | orchestrator | 2025-07-12 20:17:01.917184 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-07-12 20:17:01.917187 | orchestrator | Saturday 12 July 2025 20:14:30 +0000 (0:00:00.164) 0:00:32.768 ********* 2025-07-12 20:17:01.917191 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:01.917195 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:01.917199 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:01.917202 | orchestrator | 2025-07-12 20:17:01.917206 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 20:17:01.917210 | orchestrator | Saturday 12 July 2025 20:14:31 +0000 (0:00:00.251) 0:00:33.019 ********* 2025-07-12 20:17:01.917214 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:17:01.917218 | orchestrator | 2025-07-12 20:17:01.917222 | orchestrator | TASK [service-cert-copy : 
designate | Copying over extra CA certificates] ****** 2025-07-12 20:17:01.917225 | orchestrator | Saturday 12 July 2025 20:14:31 +0000 (0:00:00.587) 0:00:33.607 ********* 2025-07-12 20:17:01.917229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.917241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.917246 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.917250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917307 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917322 | orchestrator | 2025-07-12 20:17:01.917326 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-07-12 20:17:01.917330 | orchestrator | Saturday 12 July 2025 20:14:39 +0000 (0:00:07.383) 0:00:40.991 ********* 2025-07-12 20:17:01.917334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.917343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:17:01.917352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917368 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:01.917372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.917380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:17:01.917386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917404 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:01.917408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-07-12 20:17:01.917414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:17:01.917422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917432 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917441 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:01.917447 | orchestrator | 2025-07-12 20:17:01.917453 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-07-12 20:17:01.917462 | orchestrator | Saturday 12 July 2025 20:14:41 +0000 (0:00:02.514) 0:00:43.505 ********* 2025-07-12 20:17:01.917474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.917480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.917493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:17:01.917500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:17:01.917506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  
2025-07-12 20:17:01.917523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917572 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:01.917578 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:01.917585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.917591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:17:01.917598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.917625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 
20:17:01.917636 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:01.917643 | orchestrator |
2025-07-12 20:17:01.917649 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-07-12 20:17:01.917655 | orchestrator | Saturday 12 July 2025 20:14:43 +0000 (0:00:01.821) 0:00:45.326 *********
2025-07-12 20:17:01.917662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:17:01.917668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.917679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.917689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.917795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:17:01.917802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:17:01.917814 | orchestrator |
2025-07-12 20:17:01.917820 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-07-12 20:17:01.917826 | orchestrator | Saturday 12 July 2025 20:14:50 +0000 (0:00:07.058) 0:00:52.385 *********
2025-07-12 20:17:01.918422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:17:01.918516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.918543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.918575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.918799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:17:01.918810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:17:01.918827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:17:01.918846 | orchestrator |
2025-07-12 20:17:01.918858 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-07-12 20:17:01.918871 | orchestrator | Saturday 12 July 2025 20:15:09 +0000 (0:00:19.098) 0:01:11.483 *********
2025-07-12 20:17:01.918882 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-07-12 20:17:01.918893 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-07-12 20:17:01.918904 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-07-12 20:17:01.918914 | orchestrator |
2025-07-12 20:17:01.918925 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-07-12 20:17:01.918936 | orchestrator | Saturday 12 July 2025 20:15:14 +0000 (0:00:04.500) 0:01:15.983 *********
2025-07-12 20:17:01.919012 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-07-12 20:17:01.919027 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-07-12 20:17:01.919040 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-07-12 20:17:01.919053 | orchestrator |
2025-07-12 20:17:01.919066 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-07-12 20:17:01.919087 | orchestrator | Saturday 12 July 2025 20:15:16 +0000 (0:00:02.503) 0:01:18.486 *********
2025-07-12 20:17:01.919101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:17:01.919117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.919131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.919159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919400 | orchestrator | 2025-07-12 20:17:01.919411 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-07-12 20:17:01.919423 | orchestrator | Saturday 12 July 2025 20:15:19 +0000 (0:00:03.015) 0:01:21.502 ********* 2025-07-12 20:17:01.919442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.919454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.919466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.919491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.919703 | orchestrator | 
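The per-item `changed:`/`skipping:` pattern in the tasks above comes from each host iterating the full services dict and acting only on entries whose `group` it belongs to; all other items are reported as skipped. A minimal illustrative sketch of that selection logic (function and variable names are hypothetical, this is not the kolla-ansible implementation):

```python
# Illustrative sketch of the loop pattern visible in the log:
# every host sees the whole services dict, but only acts on the
# entries whose 'group' matches one of its own groups, so the
# remaining items show up as "skipping" in the task output.
services = {
    "designate-api": {"group": "designate-api", "enabled": True},
    "designate-backend-bind9": {"group": "designate-backend-bind9", "enabled": True},
    "designate-worker": {"group": "designate-worker", "enabled": True},
}

def actions_for(host_groups, services):
    """Return per-service 'changed' or 'skipping' for one host."""
    results = {}
    for name, svc in services.items():
        if svc["enabled"] and svc["group"] in host_groups:
            results[name] = "changed"
        else:
            results[name] = "skipping"
    return results

# A host that is in the bind9 and worker groups, but handles the
# rndc config only for those services (mirroring the log above):
print(actions_for({"designate-backend-bind9", "designate-worker"}, services))
```

This mirrors why, for the rndc.conf/rndc.key tasks, each node reports `changed` only for `designate-backend-bind9` and `designate-worker` while the api/central/mdns/producer items are skipped.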
2025-07-12 20:17:01.919714 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 20:17:01.919726 | orchestrator | Saturday 12 July 2025 20:15:22 +0000 (0:00:02.825) 0:01:24.327 ********* 2025-07-12 20:17:01.919737 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:01.919748 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:01.919759 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:01.919770 | orchestrator | 2025-07-12 20:17:01.919781 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-07-12 20:17:01.919792 | orchestrator | Saturday 12 July 2025 20:15:22 +0000 (0:00:00.494) 0:01:24.822 ********* 2025-07-12 20:17:01.919809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.919822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:17:01.919841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919893 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:01.919910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.919923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:17:01.919961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.919988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.920004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.920016 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:01.920028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:17:01.920046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:17:01.920065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.920077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  
2025-07-12 20:17:01.920089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.920105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:17:01.920117 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:01.920128 | orchestrator | 2025-07-12 20:17:01.920140 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-07-12 20:17:01.920151 | orchestrator | Saturday 12 July 2025 20:15:24 +0000 (0:00:01.472) 0:01:26.295 ********* 2025-07-12 20:17:01.920163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.920181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.920200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:17:01.920212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:17:01.920427 | orchestrator | 2025-07-12 20:17:01.920439 | orchestrator | TASK [designate : 
include_tasks] *********************************************** 2025-07-12 20:17:01.920450 | orchestrator | Saturday 12 July 2025 20:15:29 +0000 (0:00:05.187) 0:01:31.482 ********* 2025-07-12 20:17:01.920462 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:01.920473 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:01.920485 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:01.920496 | orchestrator | 2025-07-12 20:17:01.920513 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-07-12 20:17:01.920525 | orchestrator | Saturday 12 July 2025 20:15:30 +0000 (0:00:00.588) 0:01:32.070 ********* 2025-07-12 20:17:01.920536 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-07-12 20:17:01.920547 | orchestrator | 2025-07-12 20:17:01.920558 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-07-12 20:17:01.920569 | orchestrator | Saturday 12 July 2025 20:15:33 +0000 (0:00:03.312) 0:01:35.383 ********* 2025-07-12 20:17:01.920580 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 20:17:01.920591 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-07-12 20:17:01.920610 | orchestrator | 2025-07-12 20:17:01.920621 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-07-12 20:17:01.920632 | orchestrator | Saturday 12 July 2025 20:15:35 +0000 (0:00:02.462) 0:01:37.846 ********* 2025-07-12 20:17:01.920643 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:01.920654 | orchestrator | 2025-07-12 20:17:01.920666 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-12 20:17:01.920677 | orchestrator | Saturday 12 July 2025 20:15:55 +0000 (0:00:20.028) 0:01:57.874 ********* 2025-07-12 20:17:01.920688 | orchestrator | 2025-07-12 20:17:01.920699 | orchestrator | TASK [designate : 
Flush handlers] ********************************************** 2025-07-12 20:17:01.920710 | orchestrator | Saturday 12 July 2025 20:15:56 +0000 (0:00:00.083) 0:01:57.958 ********* 2025-07-12 20:17:01.920721 | orchestrator | 2025-07-12 20:17:01.921070 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-12 20:17:01.921091 | orchestrator | Saturday 12 July 2025 20:15:56 +0000 (0:00:00.070) 0:01:58.029 ********* 2025-07-12 20:17:01.921102 | orchestrator | 2025-07-12 20:17:01.921115 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-07-12 20:17:01.921134 | orchestrator | Saturday 12 July 2025 20:15:56 +0000 (0:00:00.068) 0:01:58.097 ********* 2025-07-12 20:17:01.921146 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:01.921157 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:17:01.921168 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:17:01.921180 | orchestrator | 2025-07-12 20:17:01.921191 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-07-12 20:17:01.921202 | orchestrator | Saturday 12 July 2025 20:16:07 +0000 (0:00:11.185) 0:02:09.283 ********* 2025-07-12 20:17:01.921213 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:01.921224 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:17:01.921244 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:17:01.921262 | orchestrator | 2025-07-12 20:17:01.921280 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-07-12 20:17:01.921300 | orchestrator | Saturday 12 July 2025 20:16:13 +0000 (0:00:06.065) 0:02:15.348 ********* 2025-07-12 20:17:01.921319 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:17:01.921338 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:17:01.921359 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:01.921379 | orchestrator | 
2025-07-12 20:17:01.921397 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-07-12 20:17:01.921409 | orchestrator | Saturday 12 July 2025 20:16:22 +0000 (0:00:08.993) 0:02:24.342 ********* 2025-07-12 20:17:01.921431 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:17:01.921449 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:17:01.921468 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:01.921486 | orchestrator | 2025-07-12 20:17:01.921502 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-07-12 20:17:01.921520 | orchestrator | Saturday 12 July 2025 20:16:33 +0000 (0:00:11.226) 0:02:35.569 ********* 2025-07-12 20:17:01.921539 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:01.921560 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:17:01.921581 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:17:01.921600 | orchestrator | 2025-07-12 20:17:01.921620 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-07-12 20:17:01.921810 | orchestrator | Saturday 12 July 2025 20:16:44 +0000 (0:00:10.635) 0:02:46.204 ********* 2025-07-12 20:17:01.921825 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:01.921838 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:17:01.921850 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:17:01.921863 | orchestrator | 2025-07-12 20:17:01.921875 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-07-12 20:17:01.921887 | orchestrator | Saturday 12 July 2025 20:16:51 +0000 (0:00:07.126) 0:02:53.330 ********* 2025-07-12 20:17:01.921900 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:01.921912 | orchestrator | 2025-07-12 20:17:01.921940 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:17:01.921971 | 
orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 20:17:01.921984 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:17:01.921995 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:17:01.922006 | orchestrator | 2025-07-12 20:17:01.922054 | orchestrator | 2025-07-12 20:17:01.922068 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:17:01.922079 | orchestrator | Saturday 12 July 2025 20:16:58 +0000 (0:00:07.123) 0:03:00.454 ********* 2025-07-12 20:17:01.922090 | orchestrator | =============================================================================== 2025-07-12 20:17:01.922101 | orchestrator | designate : Running Designate bootstrap container ---------------------- 20.03s 2025-07-12 20:17:01.922112 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.10s 2025-07-12 20:17:01.922123 | orchestrator | designate : Restart designate-producer container ----------------------- 11.23s 2025-07-12 20:17:01.922143 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 11.19s 2025-07-12 20:17:01.922154 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.63s 2025-07-12 20:17:01.922165 | orchestrator | designate : Restart designate-central container ------------------------- 8.99s 2025-07-12 20:17:01.922176 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.38s 2025-07-12 20:17:01.922187 | orchestrator | designate : Restart designate-worker container -------------------------- 7.13s 2025-07-12 20:17:01.922198 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.12s 2025-07-12 20:17:01.922208 | orchestrator | designate : 
Copying over config.json files for services ----------------- 7.06s 2025-07-12 20:17:01.922219 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.33s 2025-07-12 20:17:01.922230 | orchestrator | designate : Restart designate-api container ----------------------------- 6.07s 2025-07-12 20:17:01.922241 | orchestrator | designate : Check designate containers ---------------------------------- 5.19s 2025-07-12 20:17:01.922252 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 4.50s 2025-07-12 20:17:01.922263 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.46s 2025-07-12 20:17:01.922274 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.22s 2025-07-12 20:17:01.922285 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.21s 2025-07-12 20:17:01.922296 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.86s 2025-07-12 20:17:01.922308 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.43s 2025-07-12 20:17:01.922319 | orchestrator | designate : Creating Designate databases -------------------------------- 3.31s 2025-07-12 20:17:01.922338 | orchestrator | 2025-07-12 20:17:01 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:17:01.922350 | orchestrator | 2025-07-12 20:17:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:04.944928 | orchestrator | 2025-07-12 20:17:04 | INFO  | Task f88eea8e-9422-41f7-a28f-4449a85c3585 is in state STARTED 2025-07-12 20:17:04.945351 | orchestrator | 2025-07-12 20:17:04 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:17:04.947109 | orchestrator | 2025-07-12 20:17:04 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:17:04.947902 | 
orchestrator | 2025-07-12 20:17:04 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:17:04.948009 | orchestrator | 2025-07-12 20:17:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:07.991850 | orchestrator | 2025-07-12 20:17:07 | INFO  | Task f88eea8e-9422-41f7-a28f-4449a85c3585 is in state SUCCESS 2025-07-12 20:17:07.992367 | orchestrator | 2025-07-12 20:17:07 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:17:07.994593 | orchestrator | 2025-07-12 20:17:07 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:17:07.994668 | orchestrator | 2025-07-12 20:17:07 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:17:07.997056 | orchestrator | 2025-07-12 20:17:07 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:17:07.997109 | orchestrator | 2025-07-12 20:17:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:11.048921 | orchestrator | 2025-07-12 20:17:11 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:17:11.051365 | orchestrator | 2025-07-12 20:17:11 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:17:11.051744 | orchestrator | 2025-07-12 20:17:11 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:17:11.052581 | orchestrator | 2025-07-12 20:17:11 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:17:11.052595 | orchestrator | 2025-07-12 20:17:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:14.093144 | orchestrator | 2025-07-12 20:17:14 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:17:14.093894 | orchestrator | 2025-07-12 20:17:14 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:17:14.095096 | orchestrator | 2025-07-12 
20:17:14 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:17:14.096322 | orchestrator | 2025-07-12 20:17:14 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:17:14.096354 | orchestrator | 2025-07-12 20:17:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:59.750484 | orchestrator | 2025-07-12 20:17:59 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:17:59.752452 | orchestrator | 2025-07-12 20:17:59 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:17:59.755103 | orchestrator | 2025-07-12 20:17:59 | INFO  | Task 
843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:17:59.756478 | orchestrator | 2025-07-12 20:17:59 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:17:59.756506 | orchestrator | 2025-07-12 20:17:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:02.797766 | orchestrator | 2025-07-12 20:18:02 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:02.799331 | orchestrator | 2025-07-12 20:18:02 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:02.801273 | orchestrator | 2025-07-12 20:18:02 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:02.802746 | orchestrator | 2025-07-12 20:18:02 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:02.802774 | orchestrator | 2025-07-12 20:18:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:05.838483 | orchestrator | 2025-07-12 20:18:05 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:05.839981 | orchestrator | 2025-07-12 20:18:05 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:05.840968 | orchestrator | 2025-07-12 20:18:05 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:05.842304 | orchestrator | 2025-07-12 20:18:05 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:05.842901 | orchestrator | 2025-07-12 20:18:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:08.875923 | orchestrator | 2025-07-12 20:18:08 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:08.876863 | orchestrator | 2025-07-12 20:18:08 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:08.879385 | orchestrator | 2025-07-12 20:18:08 | INFO  | Task 
843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:08.881945 | orchestrator | 2025-07-12 20:18:08 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:08.882463 | orchestrator | 2025-07-12 20:18:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:11.924110 | orchestrator | 2025-07-12 20:18:11 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:11.927282 | orchestrator | 2025-07-12 20:18:11 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:11.930217 | orchestrator | 2025-07-12 20:18:11 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:11.932574 | orchestrator | 2025-07-12 20:18:11 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:11.932711 | orchestrator | 2025-07-12 20:18:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:14.982312 | orchestrator | 2025-07-12 20:18:14 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:14.983479 | orchestrator | 2025-07-12 20:18:14 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:14.984055 | orchestrator | 2025-07-12 20:18:14 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:14.985085 | orchestrator | 2025-07-12 20:18:14 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:14.985283 | orchestrator | 2025-07-12 20:18:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:18.039106 | orchestrator | 2025-07-12 20:18:18 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:18.040643 | orchestrator | 2025-07-12 20:18:18 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:18.042369 | orchestrator | 2025-07-12 20:18:18 | INFO  | Task 
843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:18.044059 | orchestrator | 2025-07-12 20:18:18 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:18.044092 | orchestrator | 2025-07-12 20:18:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:21.085321 | orchestrator | 2025-07-12 20:18:21 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:21.087780 | orchestrator | 2025-07-12 20:18:21 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:21.089924 | orchestrator | 2025-07-12 20:18:21 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:21.092309 | orchestrator | 2025-07-12 20:18:21 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:21.092339 | orchestrator | 2025-07-12 20:18:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:24.134417 | orchestrator | 2025-07-12 20:18:24 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:24.135123 | orchestrator | 2025-07-12 20:18:24 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:24.136895 | orchestrator | 2025-07-12 20:18:24 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:24.138448 | orchestrator | 2025-07-12 20:18:24 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:24.138516 | orchestrator | 2025-07-12 20:18:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:27.183310 | orchestrator | 2025-07-12 20:18:27 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:27.183886 | orchestrator | 2025-07-12 20:18:27 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:27.187679 | orchestrator | 2025-07-12 20:18:27 | INFO  | Task 
843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:27.188324 | orchestrator | 2025-07-12 20:18:27 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:27.188528 | orchestrator | 2025-07-12 20:18:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:30.223966 | orchestrator | 2025-07-12 20:18:30 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:30.224400 | orchestrator | 2025-07-12 20:18:30 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:30.224861 | orchestrator | 2025-07-12 20:18:30 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:30.226326 | orchestrator | 2025-07-12 20:18:30 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:30.226356 | orchestrator | 2025-07-12 20:18:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:33.253506 | orchestrator | 2025-07-12 20:18:33 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:33.257413 | orchestrator | 2025-07-12 20:18:33 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:33.258066 | orchestrator | 2025-07-12 20:18:33 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:33.258705 | orchestrator | 2025-07-12 20:18:33 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:33.258728 | orchestrator | 2025-07-12 20:18:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:36.296534 | orchestrator | 2025-07-12 20:18:36 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:36.296766 | orchestrator | 2025-07-12 20:18:36 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:36.297524 | orchestrator | 2025-07-12 20:18:36 | INFO  | Task 
843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:36.298144 | orchestrator | 2025-07-12 20:18:36 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:36.298204 | orchestrator | 2025-07-12 20:18:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:39.322153 | orchestrator | 2025-07-12 20:18:39 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:39.323368 | orchestrator | 2025-07-12 20:18:39 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:39.325085 | orchestrator | 2025-07-12 20:18:39 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:39.325681 | orchestrator | 2025-07-12 20:18:39 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:39.325710 | orchestrator | 2025-07-12 20:18:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:42.355833 | orchestrator | 2025-07-12 20:18:42 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:42.358708 | orchestrator | 2025-07-12 20:18:42 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:42.362684 | orchestrator | 2025-07-12 20:18:42 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:42.364378 | orchestrator | 2025-07-12 20:18:42 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED 2025-07-12 20:18:42.364840 | orchestrator | 2025-07-12 20:18:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:45.416246 | orchestrator | 2025-07-12 20:18:45 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:45.417225 | orchestrator | 2025-07-12 20:18:45 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:45.418921 | orchestrator | 2025-07-12 20:18:45 | INFO  | Task 
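The entries above are a client-side wait loop: the orchestrator polls each submitted task's state and sleeps between checks until every task reaches a terminal state. A minimal sketch of that pattern, assuming a hypothetical `get_state` callback in place of the real task-backend query:

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll get_state(task_id) for each task until all reach a terminal state.

    get_state is a callable returning a state string such as "STARTED"
    or "SUCCESS" (a stand-in for the real task-status API).
    """
    pending = list(task_ids)
    states = {}
    while pending:
        for task_id in pending:
            states[task_id] = get_state(task_id)
            log(f"Task {task_id} is in state {states[task_id]}")
        # Keep only tasks that have not finished yet.
        pending = [t for t in pending if states[t] not in TERMINAL_STATES]
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

This matches the log's shape: each poll cycle reports every still-pending task, then waits before the next check.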
843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED
2025-07-12 20:18:45.419731 | orchestrator | 2025-07-12 20:18:45 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state STARTED
2025-07-12 20:18:45.419848 | orchestrator | 2025-07-12 20:18:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:18:48.464054 | orchestrator | 2025-07-12 20:18:48 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED
2025-07-12 20:18:48.465822 | orchestrator | 2025-07-12 20:18:48 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED
2025-07-12 20:18:48.467153 | orchestrator | 2025-07-12 20:18:48 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED
2025-07-12 20:18:48.469453 | orchestrator | 2025-07-12 20:18:48 | INFO  | Task 33d0b16c-cd58-432e-be7f-8b3ad800588c is in state SUCCESS
2025-07-12 20:18:48.469665 | orchestrator | 2025-07-12 20:18:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:18:48.472042 | orchestrator |
2025-07-12 20:18:48.472121 | orchestrator |
2025-07-12 20:18:48.472135 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:18:48.472147 | orchestrator |
2025-07-12 20:18:48.472158 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:18:48.472169 | orchestrator | Saturday 12 July 2025 20:17:03 +0000 (0:00:00.272) 0:00:00.272 *********
2025-07-12 20:18:48.472180 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:18:48.472192 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:18:48.472203 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:18:48.472226 | orchestrator |
2025-07-12 20:18:48.472238 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:18:48.472249 | orchestrator | Saturday 12 July 2025 20:17:03 +0000 (0:00:00.579) 0:00:00.851 *********
2025-07-12 20:18:48.472260 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-07-12 20:18:48.472272 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-07-12 20:18:48.472283 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-07-12 20:18:48.472294 | orchestrator |
2025-07-12 20:18:48.472305 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-07-12 20:18:48.472316 | orchestrator |
2025-07-12 20:18:48.472326 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-07-12 20:18:48.472337 | orchestrator | Saturday 12 July 2025 20:17:04 +0000 (0:00:00.874) 0:00:01.725 *********
2025-07-12 20:18:48.472348 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:18:48.472361 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:18:48.472372 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:18:48.472382 | orchestrator |
2025-07-12 20:18:48.472394 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:18:48.472405 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:18:48.472418 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:18:48.472429 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:18:48.472440 | orchestrator |
2025-07-12 20:18:48.472451 | orchestrator |
2025-07-12 20:18:48.472462 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:18:48.472473 | orchestrator | Saturday 12 July 2025 20:17:05 +0000 (0:00:00.898) 0:00:02.624 *********
2025-07-12 20:18:48.472484 | orchestrator | ===============================================================================
2025-07-12 20:18:48.472494 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.90s
2025-07-12 20:18:48.472505 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.87s
2025-07-12 20:18:48.472516 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.58s
2025-07-12 20:18:48.472527 | orchestrator |
2025-07-12 20:18:48.472538 | orchestrator |
2025-07-12 20:18:48.472549 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:18:48.472559 | orchestrator |
2025-07-12 20:18:48.472570 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:18:48.472581 | orchestrator | Saturday 12 July 2025 20:16:51 +0000 (0:00:00.236) 0:00:00.236 *********
2025-07-12 20:18:48.472592 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:18:48.472602 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:18:48.472613 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:18:48.472624 | orchestrator |
2025-07-12 20:18:48.472637 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:18:48.472649 | orchestrator | Saturday 12 July 2025 20:16:51 +0000 (0:00:00.289) 0:00:00.526 *********
2025-07-12 20:18:48.472678 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-07-12 20:18:48.472691 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-07-12 20:18:48.472704 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-07-12 20:18:48.472716 | orchestrator |
2025-07-12 20:18:48.472742 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-07-12 20:18:48.472754 | orchestrator |
2025-07-12 20:18:48.472767 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-07-12 20:18:48.472779 | orchestrator | Saturday 12 July 2025 20:16:51 +0000 (0:00:00.437) 0:00:00.964
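The "Waiting for Nova public port to be UP" task above succeeds once a TCP connection to the API port can be opened (Kolla does this with Ansible's `wait_for`). A minimal sketch of the same check in Python, with hypothetical host/port values:

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Return True once a TCP connection to host:port succeeds,
    or False if the deadline passes first (sketch of a port-up wait)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection performs the full TCP handshake.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# Hypothetical usage against a Nova API endpoint:
# wait_for_port("192.168.16.9", 8774, timeout=300)
```

Note this only proves the port accepts connections, not that the service behind it is healthy; the container healthchecks in this log (`healthcheck_curl`) go one step further and issue an HTTP request.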
*********
2025-07-12 20:18:48.472830 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:18:48.472843 | orchestrator |
2025-07-12 20:18:48.472855 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-07-12 20:18:48.472868 | orchestrator | Saturday 12 July 2025 20:16:52 +0000 (0:00:00.528) 0:00:01.493 *********
2025-07-12 20:18:48.472881 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-07-12 20:18:48.472893 | orchestrator |
2025-07-12 20:18:48.472905 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-07-12 20:18:48.472917 | orchestrator | Saturday 12 July 2025 20:16:55 +0000 (0:00:03.235) 0:00:04.728 *********
2025-07-12 20:18:48.472929 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-07-12 20:18:48.472941 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-07-12 20:18:48.472954 | orchestrator |
2025-07-12 20:18:48.472966 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-07-12 20:18:48.472979 | orchestrator | Saturday 12 July 2025 20:17:02 +0000 (0:00:06.959) 0:00:11.688 *********
2025-07-12 20:18:48.472991 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:18:48.473002 | orchestrator |
2025-07-12 20:18:48.473013 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-07-12 20:18:48.473024 | orchestrator | Saturday 12 July 2025 20:17:06 +0000 (0:00:03.622) 0:00:15.310 *********
2025-07-12 20:18:48.473049 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:18:48.473060 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-07-12 20:18:48.473071 | orchestrator |
2025-07-12 20:18:48.473082 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-07-12 20:18:48.473093 | orchestrator | Saturday 12 July 2025 20:17:10 +0000 (0:00:04.161) 0:00:19.471 *********
2025-07-12 20:18:48.473104 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:18:48.473114 | orchestrator |
2025-07-12 20:18:48.473125 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-07-12 20:18:48.473135 | orchestrator | Saturday 12 July 2025 20:17:13 +0000 (0:00:02.870) 0:00:22.342 *********
2025-07-12 20:18:48.473146 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-07-12 20:18:48.473156 | orchestrator |
2025-07-12 20:18:48.473167 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-07-12 20:18:48.473178 | orchestrator | Saturday 12 July 2025 20:17:17 +0000 (0:00:03.859) 0:00:26.201 *********
2025-07-12 20:18:48.473189 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:18:48.473199 | orchestrator |
2025-07-12 20:18:48.473210 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-07-12 20:18:48.473221 | orchestrator | Saturday 12 July 2025 20:17:20 +0000 (0:00:03.321) 0:00:29.523 *********
2025-07-12 20:18:48.473231 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:18:48.473242 | orchestrator |
2025-07-12 20:18:48.473252 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-07-12 20:18:48.473263 | orchestrator | Saturday 12 July 2025 20:17:24 +0000 (0:00:03.915) 0:00:33.438 *********
2025-07-12 20:18:48.473274 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:18:48.473293 | orchestrator |
2025-07-12 20:18:48.473304 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-07-12 20:18:48.473314 |
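The service-ks-register tasks above are idempotent: a resource that already exists reports `ok`, one that had to be created reports `changed`. A minimal sketch of that check-then-create pattern, with a hypothetical `existing`/`create` pair standing in for the real Keystone API lookups:

```python
def ensure_resource(existing, name, create):
    """Create `name` via create() only if it is missing, and report an
    Ansible-style status: "changed" on creation, "ok" if already present.

    existing: mutable set of names already registered (stand-in for a
    Keystone list call); create: callable performing the creation.
    """
    if name in existing:
        return "ok"       # already registered, nothing to do
    create(name)
    existing.add(name)
    return "changed"      # resource was created on this run
```

This is why the log mixes `changed:` (services, endpoints, users) with `ok:` (the pre-existing `service` project and `admin` role) on a fresh deployment.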
orchestrator | Saturday 12 July 2025 20:17:28 +0000 (0:00:03.705) 0:00:37.144 *********
2025-07-12 20:18:48.473329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:18:48.473350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:18:48.473363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:18:48.473382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:18:48.473395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:18:48.473413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:18:48.473425 | orchestrator |
2025-07-12 20:18:48.473436 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-07-12 20:18:48.473447 | orchestrator | Saturday 12 July 2025 20:17:29 +0000 (0:00:01.324) 0:00:38.468 *********
2025-07-12 20:18:48.473458 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:18:48.473468 | orchestrator |
2025-07-12 20:18:48.473479 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-07-12 20:18:48.473490 | orchestrator | Saturday 12 July 2025 20:17:29 +0000 (0:00:00.105) 0:00:38.574 *********
2025-07-12 20:18:48.473500 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:18:48.473511 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:18:48.473522 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:18:48.473532 | orchestrator |
2025-07-12 20:18:48.473543 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-07-12 20:18:48.473558 | orchestrator | Saturday 12 July 2025 20:17:29 +0000 (0:00:00.393) 0:00:38.967 *********
2025-07-12 20:18:48.473569 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:18:48.473580 | orchestrator |
2025-07-12 20:18:48.473591 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-07-12 20:18:48.473601 | orchestrator | Saturday 12 July 2025 20:17:30 +0000 (0:00:00.783) 0:00:39.751 *********
2025-07-12 20:18:48.473612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:18:48.473633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:18:48.473651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:18:48.473663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:18:48.473679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:18:48.473691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:18:48.473702 | orchestrator |
2025-07-12 20:18:48.473713 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-07-12 20:18:48.473723 | orchestrator | Saturday 12 July 2025 20:17:32 +0000 (0:00:02.243) 0:00:41.994 *********
2025-07-12 20:18:48.473734 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:18:48.473745 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:18:48.473756 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:18:48.473766 | orchestrator |
2025-07-12 20:18:48.473777 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-07-12 20:18:48.473834 | orchestrator | Saturday 12 July 2025 20:17:33 +0000 (0:00:00.257) 0:00:42.251 *********
2025-07-12 20:18:48.473855 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:18:48.473872 | orchestrator |
2025-07-12 20:18:48.473891 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-07-12 20:18:48.473902 | orchestrator | Saturday 12 July 2025 20:17:33 +0000 (0:00:00.678) 0:00:42.930 *********
2025-07-12 20:18:48.473914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz',
'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.473926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.473942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.473954 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.473981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.473993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.474005 | orchestrator | 2025-07-12 20:18:48.474067 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-07-12 20:18:48.474082 | orchestrator | Saturday 12 July 2025 20:17:36 +0000 (0:00:02.380) 0:00:45.311 ********* 2025-07-12 20:18:48.474094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:18:48.474111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:18:48.474122 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:18:48.474134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:18:48.474173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 
20:18:48.474185 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:18:48.474197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:18:48.474209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:18:48.474220 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:18:48.474231 | orchestrator | 2025-07-12 20:18:48.474242 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] 
****** 2025-07-12 20:18:48.474254 | orchestrator | Saturday 12 July 2025 20:17:36 +0000 (0:00:00.559) 0:00:45.870 ********* 2025-07-12 20:18:48.474276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:18:48.474295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:18:48.474306 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:18:48.474325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:18:48.474337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:18:48.474348 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:18:48.474359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:18:48.474375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:18:48.474393 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:18:48.474404 | orchestrator | 2025-07-12 20:18:48.474415 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-07-12 20:18:48.474426 | orchestrator | Saturday 12 July 2025 20:17:37 +0000 (0:00:01.001) 0:00:46.872 ********* 2025-07-12 20:18:48.474632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.474738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.474754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.474769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.474872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.474915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.474929 | orchestrator | 2025-07-12 20:18:48.474942 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-07-12 20:18:48.474955 | orchestrator | Saturday 12 July 2025 20:17:40 +0000 (0:00:02.342) 0:00:49.215 ********* 2025-07-12 20:18:48.475015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.475030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.475047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.475068 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.475089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.475101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.475113 | orchestrator | 2025-07-12 20:18:48.475125 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-07-12 20:18:48.475138 | orchestrator | Saturday 12 July 2025 20:17:46 +0000 (0:00:05.910) 0:00:55.125 ********* 2025-07-12 20:18:48.475152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:18:48.475170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:18:48.475190 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:18:48.475204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:18:48.475226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:18:48.475240 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:18:48.475253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:18:48.475265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:18:48.475284 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:18:48.475297 | orchestrator | 2025-07-12 20:18:48.475308 | orchestrator | 
TASK [magnum : Check magnum containers] **************************************** 2025-07-12 20:18:48.475319 | orchestrator | Saturday 12 July 2025 20:17:47 +0000 (0:00:01.076) 0:00:56.202 ********* 2025-07-12 20:18:48.475336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.475355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.475367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:18:48.475379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.475391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.475414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:18:48.475426 | orchestrator | 2025-07-12 20:18:48.475438 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 20:18:48.475449 | orchestrator | Saturday 12 July 2025 20:17:49 +0000 (0:00:02.487) 0:00:58.689 ********* 2025-07-12 20:18:48.475460 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:18:48.475472 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:18:48.475483 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:18:48.475494 | orchestrator | 2025-07-12 20:18:48.475505 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-07-12 
20:18:48.475516 | orchestrator | Saturday 12 July 2025 20:17:50 +0000 (0:00:00.347) 0:00:59.036 ********* 2025-07-12 20:18:48.475527 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:18:48.475538 | orchestrator | 2025-07-12 20:18:48.475549 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-07-12 20:18:48.475561 | orchestrator | Saturday 12 July 2025 20:17:52 +0000 (0:00:02.323) 0:01:01.360 ********* 2025-07-12 20:18:48.475571 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:18:48.475582 | orchestrator | 2025-07-12 20:18:48.475593 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-07-12 20:18:48.475604 | orchestrator | Saturday 12 July 2025 20:17:54 +0000 (0:00:02.457) 0:01:03.817 ********* 2025-07-12 20:18:48.475622 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:18:48.475634 | orchestrator | 2025-07-12 20:18:48.475645 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 20:18:48.475656 | orchestrator | Saturday 12 July 2025 20:18:13 +0000 (0:00:18.979) 0:01:22.797 ********* 2025-07-12 20:18:48.475667 | orchestrator | 2025-07-12 20:18:48.475678 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 20:18:48.475689 | orchestrator | Saturday 12 July 2025 20:18:13 +0000 (0:00:00.068) 0:01:22.866 ********* 2025-07-12 20:18:48.475700 | orchestrator | 2025-07-12 20:18:48.475712 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 20:18:48.475723 | orchestrator | Saturday 12 July 2025 20:18:13 +0000 (0:00:00.064) 0:01:22.930 ********* 2025-07-12 20:18:48.475734 | orchestrator | 2025-07-12 20:18:48.475745 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-07-12 20:18:48.475756 | orchestrator | Saturday 12 July 2025 20:18:13 +0000 
(0:00:00.064) 0:01:22.995 ********* 2025-07-12 20:18:48.475767 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:18:48.475777 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:18:48.475818 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:18:48.475830 | orchestrator | 2025-07-12 20:18:48.475841 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-07-12 20:18:48.475864 | orchestrator | Saturday 12 July 2025 20:18:37 +0000 (0:00:23.266) 0:01:46.261 ********* 2025-07-12 20:18:48.475876 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:18:48.475887 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:18:48.475897 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:18:48.475908 | orchestrator | 2025-07-12 20:18:48.475920 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:18:48.475932 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:18:48.475944 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:18:48.475955 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:18:48.475966 | orchestrator | 2025-07-12 20:18:48.475977 | orchestrator | 2025-07-12 20:18:48.475988 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:18:48.476000 | orchestrator | Saturday 12 July 2025 20:18:47 +0000 (0:00:10.203) 0:01:56.465 ********* 2025-07-12 20:18:48.476010 | orchestrator | =============================================================================== 2025-07-12 20:18:48.476021 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 23.27s 2025-07-12 20:18:48.476033 | orchestrator | magnum : Running Magnum bootstrap container 
---------------------------- 18.98s 2025-07-12 20:18:48.476044 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.20s 2025-07-12 20:18:48.476055 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.96s 2025-07-12 20:18:48.476066 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.91s 2025-07-12 20:18:48.476078 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.16s 2025-07-12 20:18:48.476089 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.92s 2025-07-12 20:18:48.476100 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.86s 2025-07-12 20:18:48.476111 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.71s 2025-07-12 20:18:48.476122 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.62s 2025-07-12 20:18:48.476138 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.32s 2025-07-12 20:18:48.476150 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.24s 2025-07-12 20:18:48.476161 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.87s 2025-07-12 20:18:48.476172 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.49s 2025-07-12 20:18:48.476183 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.46s 2025-07-12 20:18:48.476194 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.38s 2025-07-12 20:18:48.476205 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.34s 2025-07-12 20:18:48.476216 | orchestrator | magnum : Creating Magnum database 
--------------------------------------- 2.32s 2025-07-12 20:18:48.476227 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.24s 2025-07-12 20:18:48.476239 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.32s 2025-07-12 20:18:51.523975 | orchestrator | 2025-07-12 20:18:51 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:51.526326 | orchestrator | 2025-07-12 20:18:51 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:51.529318 | orchestrator | 2025-07-12 20:18:51 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:51.529343 | orchestrator | 2025-07-12 20:18:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:54.582563 | orchestrator | 2025-07-12 20:18:54 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:54.583538 | orchestrator | 2025-07-12 20:18:54 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:54.585475 | orchestrator | 2025-07-12 20:18:54 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:54.585834 | orchestrator | 2025-07-12 20:18:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:18:57.642472 | orchestrator | 2025-07-12 20:18:57 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:18:57.645146 | orchestrator | 2025-07-12 20:18:57 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:18:57.646373 | orchestrator | 2025-07-12 20:18:57 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:18:57.646630 | orchestrator | 2025-07-12 20:18:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:00.697343 | orchestrator | 2025-07-12 20:19:00 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 
2025-07-12 20:19:00.699454 | orchestrator | 2025-07-12 20:19:00 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:19:00.701242 | orchestrator | 2025-07-12 20:19:00 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:19:00.701269 | orchestrator | 2025-07-12 20:19:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:03.752026 | orchestrator | 2025-07-12 20:19:03 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:19:03.754075 | orchestrator | 2025-07-12 20:19:03 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:19:03.754851 | orchestrator | 2025-07-12 20:19:03 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:19:03.754922 | orchestrator | 2025-07-12 20:19:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:06.795257 | orchestrator | 2025-07-12 20:19:06 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:19:06.797799 | orchestrator | 2025-07-12 20:19:06 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:19:06.799891 | orchestrator | 2025-07-12 20:19:06 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:19:06.799936 | orchestrator | 2025-07-12 20:19:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:09.838853 | orchestrator | 2025-07-12 20:19:09 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:19:09.840681 | orchestrator | 2025-07-12 20:19:09 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state STARTED 2025-07-12 20:19:09.842495 | orchestrator | 2025-07-12 20:19:09 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:19:09.842799 | orchestrator | 2025-07-12 20:19:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:12.879234 | orchestrator | 2025-07-12 
20:19:12 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state STARTED 2025-07-12 20:19:12.883010 | orchestrator | 2025-07-12 20:19:12 | INFO  | Task ab7db2d2-0e86-40bd-a31b-a560b80ac8d9 is in state SUCCESS 2025-07-12 20:19:12.884929 | orchestrator | 2025-07-12 20:19:12.884989 | orchestrator | 2025-07-12 20:19:12.885011 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:19:12.885030 | orchestrator | 2025-07-12 20:19:12.885051 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:19:12.885104 | orchestrator | Saturday 12 July 2025 20:16:56 +0000 (0:00:00.240) 0:00:00.240 ********* 2025-07-12 20:19:12.885127 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:19:12.885149 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:19:12.885166 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:19:12.885184 | orchestrator | 2025-07-12 20:19:12.885204 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:19:12.885223 | orchestrator | Saturday 12 July 2025 20:16:56 +0000 (0:00:00.258) 0:00:00.498 ********* 2025-07-12 20:19:12.885242 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-07-12 20:19:12.885305 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-07-12 20:19:12.885325 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-07-12 20:19:12.885363 | orchestrator | 2025-07-12 20:19:12.885384 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-07-12 20:19:12.885403 | orchestrator | 2025-07-12 20:19:12.885422 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-12 20:19:12.885435 | orchestrator | Saturday 12 July 2025 20:16:57 +0000 (0:00:00.331) 0:00:00.830 ********* 2025-07-12 20:19:12.885447 | orchestrator | included: 
/ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:19:12.885459 | orchestrator | 2025-07-12 20:19:12.885470 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-07-12 20:19:12.885482 | orchestrator | Saturday 12 July 2025 20:16:57 +0000 (0:00:00.552) 0:00:01.383 ********* 2025-07-12 20:19:12.885537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.885556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.885570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.885583 | orchestrator | 2025-07-12 20:19:12.885595 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-07-12 20:19:12.885608 | orchestrator | Saturday 12 July 2025 20:16:58 +0000 (0:00:00.755) 0:00:02.138 ********* 2025-07-12 20:19:12.885621 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-07-12 20:19:12.885635 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-07-12 20:19:12.885660 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:19:12.885672 | orchestrator | 2025-07-12 20:19:12.885685 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-12 20:19:12.885698 | orchestrator | Saturday 12 July 2025 20:16:59 +0000 (0:00:01.048) 0:00:03.187 ********* 2025-07-12 20:19:12.885711 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:19:12.885723 | orchestrator | 2025-07-12 20:19:12.885735 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-07-12 20:19:12.885773 | orchestrator | Saturday 12 July 2025 20:17:00 +0000 (0:00:00.794) 0:00:03.982 ********* 2025-07-12 20:19:12.885817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.885833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.885846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.885858 | orchestrator | 2025-07-12 20:19:12.885869 
| orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-07-12 20:19:12.885880 | orchestrator | Saturday 12 July 2025 20:17:02 +0000 (0:00:01.848) 0:00:05.830 ********* 2025-07-12 20:19:12.885891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:19:12.885903 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:12.885914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:19:12.885934 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:12.885958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:19:12.885970 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:12.886151 | orchestrator | 2025-07-12 20:19:12.886165 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-07-12 20:19:12.886176 | orchestrator | Saturday 12 July 2025 20:17:02 +0000 (0:00:00.496) 0:00:06.327 ********* 2025-07-12 20:19:12.886188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:19:12.886201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:19:12.886213 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:12.886224 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:12.886236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:19:12.886247 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:12.886258 | orchestrator | 2025-07-12 20:19:12.886269 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-07-12 20:19:12.886280 | orchestrator | Saturday 12 July 2025 20:17:04 +0000 (0:00:01.335) 0:00:07.662 ********* 2025-07-12 20:19:12.886300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.886313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.886340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.886353 | orchestrator | 2025-07-12 20:19:12.886364 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-07-12 20:19:12.886375 | orchestrator | Saturday 12 July 2025 20:17:05 +0000 (0:00:01.652) 0:00:09.315 ********* 2025-07-12 20:19:12.886387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 20:19:12.886399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 20:19:12.886410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 20:19:12.886429 | orchestrator | 
2025-07-12 20:19:12.886440 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-07-12 20:19:12.886452 | orchestrator | Saturday 12 July 2025 20:17:07 +0000 (0:00:01.525) 0:00:10.841 *********
2025-07-12 20:19:12.886463 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:12.886474 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:12.886485 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:12.886495 | orchestrator | 
2025-07-12 20:19:12.886507 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-07-12 20:19:12.886518 | orchestrator | Saturday 12 July 2025 20:17:07 +0000 (0:00:00.418) 0:00:11.259 *********
2025-07-12 20:19:12.886529 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 20:19:12.886541 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 20:19:12.886551 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-07-12 20:19:12.886562 | orchestrator | 
2025-07-12 20:19:12.886573 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-07-12 20:19:12.886584 | orchestrator | Saturday 12 July 2025 20:17:09 +0000 (0:00:01.406) 0:00:12.666 *********
2025-07-12 20:19:12.886595 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 20:19:12.886606 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 20:19:12.886617 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-07-12 20:19:12.886628 | orchestrator | 
2025-07-12 20:19:12.886639 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-07-12 20:19:12.886659 | orchestrator | Saturday 12 July 2025 20:17:10 +0000 (0:00:00.928) 0:00:13.791 *********
2025-07-12 20:19:12.886689 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:19:12.886710 | orchestrator | 
2025-07-12 20:19:12.886730 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-07-12 20:19:12.886776 | orchestrator | Saturday 12 July 2025 20:17:11 +0000 (0:00:00.928) 0:00:14.720 *********
2025-07-12 20:19:12.886797 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-07-12 20:19:12.886815 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-07-12 20:19:12.886834 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:12.886853 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:19:12.886872 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:19:12.886890 | orchestrator | 
2025-07-12 20:19:12.886909 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-07-12 20:19:12.886928 | orchestrator | Saturday 12 July 2025 20:17:11 +0000 (0:00:00.608) 0:00:15.328 *********
2025-07-12 20:19:12.886941 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:12.886952 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:12.886963 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:12.886973 | orchestrator | 
2025-07-12 20:19:12.886984 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-07-12 20:19:12.886995 | orchestrator | Saturday 12 July 2025 20:17:12 +0000 (0:00:00.421) 0:00:15.750 *********
2025-07-12 20:19:12.887007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1055657, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9019911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887146 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1055657, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9019911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1055657, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9019911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1055713, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9159915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1055713, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9159915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1055713, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9159915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1055667, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9069912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1055667, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9069912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1055667, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9069912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1055718, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9259915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1055718, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9259915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1055718, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9259915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1055687, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9099913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1055687, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9099913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1055687, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9099913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1055704, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9139915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1055704, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9139915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1055704, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9139915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1055655, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9009912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1055655, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9009912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1055655, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9009912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1055660, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9029913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1055660, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9029913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1055660, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9029913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1055676, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9079914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1055676, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9079914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1055676, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9079914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1055695, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9119914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1055695, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9119914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1055695, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9119914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1055709, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9149914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1055709, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9149914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1055709, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9149914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1055665, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9039912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1055665, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9039912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1055665, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9039912, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1055701, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9129913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1055701, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9129913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1055701, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9129913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1055690, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9109914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1055690, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9109914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1055690, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9109914, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1055683, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9099913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1055683, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9099913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.887990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1055683, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9099913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.888002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1055681, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9089913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.888013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1055681, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9089913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.888024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1055681, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9089913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.888036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1055699, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9129913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.888059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1055699, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9129913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-12 20:19:12.888077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1055699, 'dev': 86, 'nlink': 1,
'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9129913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1055678, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9089913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1055678, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9089913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1055678, 'dev': 
86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9089913, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1055707, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9139915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1055707, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9139915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1055707, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9139915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1055995, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0029929, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1055995, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0029929, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1055766, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9409919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1055766, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9409919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1055995, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.0029929, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1055753, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9309916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1055753, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9309916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1055766, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9409919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1055789, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.944992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1055789, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.944992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1055753, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9309916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888368 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1055744, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9279916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1055744, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9279916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1055789, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.944992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1055892, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9719925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1055892, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9719925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1055744, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9279916, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1055793, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9679923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1055793, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9679923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1055892, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 
'ctime': 1752348509.9719925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1055895, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9739923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1055895, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9739923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1055793, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9679923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1055909, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.000993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1055909, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.000993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1055895, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9739923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.888999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1055889, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9709923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1055889, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9709923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1055909, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348510.000993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1055782, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.942992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1055782, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.942992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1055889, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9709923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1055763, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.936992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1055763, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.936992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889086 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1055782, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.942992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1055777, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.942992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1055777, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.942992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889131 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1055763, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.936992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1055756, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9349918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1055756, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9349918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1055777, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.942992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1055785, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.943992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1055785, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.943992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1055756, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9349918, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1055906, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9789925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1055906, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9789925, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1055785, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.943992, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1055902, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9759924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 115472, 'inode': 1055902, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9759924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1055906, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9789925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1055746, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9299917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1055746, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9299917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1055902, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9759924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1055749, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9299917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1055749, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9299917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1055746, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9299917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1055887, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9699924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889345 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1055887, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9699924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1055749, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9299917, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1055898, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9749925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1055898, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9749925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1055887, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9699924, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1055898, 'dev': 86, 'nlink': 1, 'atime': 1752278522.0, 'mtime': 1752278522.0, 'ctime': 1752348509.9749925, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:19:12.889409 | orchestrator | 2025-07-12 20:19:12.889418 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-07-12 20:19:12.889427 | orchestrator | Saturday 12 July 2025 20:17:50 +0000 (0:00:37.926) 0:00:53.676 ********* 2025-07-12 20:19:12.889435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.889449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.889457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:19:12.889465 | orchestrator | 2025-07-12 20:19:12.889473 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-07-12 20:19:12.889481 | orchestrator | Saturday 12 July 2025 20:17:51 +0000 (0:00:01.050) 0:00:54.726 ********* 2025-07-12 20:19:12.889489 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:12.889497 | orchestrator | 2025-07-12 20:19:12.889505 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-07-12 20:19:12.889513 | orchestrator | Saturday 12 July 2025 20:17:53 +0000 (0:00:02.598) 0:00:57.325 ********* 2025-07-12 20:19:12.889521 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:12.889529 | orchestrator | 2025-07-12 20:19:12.889537 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-12 20:19:12.889545 | orchestrator | Saturday 12 July 2025 20:17:56 +0000 (0:00:02.311) 0:00:59.637 ********* 2025-07-12 20:19:12.889553 | orchestrator | 2025-07-12 20:19:12.889560 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-12 20:19:12.889573 | orchestrator | Saturday 12 July 2025 20:17:56 +0000 (0:00:00.273) 0:00:59.910 ********* 2025-07-12 20:19:12.889581 | orchestrator | 2025-07-12 20:19:12.889593 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-12 
20:19:12.889601 | orchestrator | Saturday 12 July 2025 20:17:56 +0000 (0:00:00.064) 0:00:59.975 ********* 2025-07-12 20:19:12.889609 | orchestrator | 2025-07-12 20:19:12.889617 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-07-12 20:19:12.889625 | orchestrator | Saturday 12 July 2025 20:17:56 +0000 (0:00:00.064) 0:01:00.039 ********* 2025-07-12 20:19:12.889633 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:12.889641 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:12.889648 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:12.889656 | orchestrator | 2025-07-12 20:19:12.889666 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-07-12 20:19:12.889678 | orchestrator | Saturday 12 July 2025 20:18:03 +0000 (0:00:06.854) 0:01:06.894 ********* 2025-07-12 20:19:12.889691 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:12.889703 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:12.889716 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-07-12 20:19:12.889730 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-07-12 20:19:12.889774 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-07-12 20:19:12.889785 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:19:12.889793 | orchestrator | 2025-07-12 20:19:12.889801 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-07-12 20:19:12.889809 | orchestrator | Saturday 12 July 2025 20:18:42 +0000 (0:00:38.838) 0:01:45.732 ********* 2025-07-12 20:19:12.889816 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:12.889824 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:12.889832 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:12.889840 | orchestrator | 2025-07-12 20:19:12.889848 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-07-12 20:19:12.889856 | orchestrator | Saturday 12 July 2025 20:19:05 +0000 (0:00:23.242) 0:02:08.975 ********* 2025-07-12 20:19:12.889863 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:19:12.889871 | orchestrator | 2025-07-12 20:19:12.889879 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-07-12 20:19:12.889887 | orchestrator | Saturday 12 July 2025 20:19:07 +0000 (0:00:02.180) 0:02:11.155 ********* 2025-07-12 20:19:12.889895 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:12.889903 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:12.889910 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:12.889918 | orchestrator | 2025-07-12 20:19:12.889926 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-07-12 20:19:12.889934 | orchestrator | Saturday 12 July 2025 20:19:07 +0000 (0:00:00.307) 0:02:11.463 ********* 2025-07-12 20:19:12.889943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-07-12 20:19:12.889953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-07-12 20:19:12.889961 | orchestrator | 2025-07-12 20:19:12.889969 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-07-12 20:19:12.889977 | orchestrator | Saturday 12 July 2025 20:19:10 +0000 (0:00:02.314) 0:02:13.777 ********* 2025-07-12 20:19:12.889985 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:12.889992 | orchestrator | 2025-07-12 20:19:12.890000 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:19:12.890010 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 20:19:12.890047 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 20:19:12.890057 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 20:19:12.890065 | orchestrator | 2025-07-12 20:19:12.890073 | orchestrator | 2025-07-12 20:19:12.890081 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:19:12.890089 | orchestrator | Saturday 12 July 2025 20:19:10 +0000 (0:00:00.263) 0:02:14.041 ********* 2025-07-12 20:19:12.890096 | orchestrator | =============================================================================== 2025-07-12 20:19:12.890104 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.84s 2025-07-12 20:19:12.890112 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.93s 2025-07-12 20:19:12.890126 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 23.24s 2025-07-12 20:19:12.890134 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.85s 2025-07-12 20:19:12.890142 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.60s 2025-07-12 20:19:12.890155 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.31s 2025-07-12 20:19:12.890169 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.31s 2025-07-12 20:19:12.890177 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.18s 2025-07-12 20:19:12.890185 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.85s 2025-07-12 20:19:12.890193 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.65s 2025-07-12 20:19:12.890201 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.53s 2025-07-12 20:19:12.890209 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.41s 2025-07-12 20:19:12.890217 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.34s 2025-07-12 20:19:12.890225 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.13s 2025-07-12 20:19:12.890232 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.05s 2025-07-12 20:19:12.890240 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.05s 2025-07-12 20:19:12.890248 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.93s 2025-07-12 20:19:12.890256 | orchestrator | grafana : include_tasks 
------------------------------------------------- 0.79s
2025-07-12 20:19:12.890263 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.76s
2025-07-12 20:19:12.890271 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.61s
2025-07-12 20:19:12.890280 | orchestrator | 2025-07-12 20:19:12 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED
2025-07-12 20:19:12.890288 | orchestrator | 2025-07-12 20:19:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:19:15.935622 | orchestrator |
2025-07-12 20:19:15.935720 | orchestrator |
2025-07-12 20:19:15.935733 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:19:15.935741 | orchestrator |
2025-07-12 20:19:15.935773 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-07-12 20:19:15.935780 | orchestrator | Saturday 12 July 2025 20:10:05 +0000 (0:00:00.252) 0:00:00.252 *********
2025-07-12 20:19:15.935788 | orchestrator | changed: [testbed-manager]
2025-07-12 20:19:15.935796 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.935803 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:19:15.935810 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:19:15.935817 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:19:15.935824 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:19:15.935831 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:19:15.935837 | orchestrator |
2025-07-12 20:19:15.935844 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:19:15.935875 | orchestrator | Saturday 12 July 2025 20:10:06 +0000 (0:00:00.745) 0:00:00.997 *********
2025-07-12 20:19:15.935882 | orchestrator | changed: [testbed-manager]
2025-07-12 20:19:15.935889 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.935895 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:19:15.935902 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:19:15.935909 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:19:15.935915 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:19:15.935922 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:19:15.935929 | orchestrator |
2025-07-12 20:19:15.935936 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:19:15.935943 | orchestrator | Saturday 12 July 2025 20:10:07 +0000 (0:00:00.609) 0:00:01.607 *********
2025-07-12 20:19:15.935969 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-07-12 20:19:15.935976 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-07-12 20:19:15.935983 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-07-12 20:19:15.935990 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-07-12 20:19:15.935997 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-07-12 20:19:15.936003 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-07-12 20:19:15.936010 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-07-12 20:19:15.936017 | orchestrator |
2025-07-12 20:19:15.936036 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-07-12 20:19:15.936044 | orchestrator |
2025-07-12 20:19:15.936056 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-07-12 20:19:15.936066 | orchestrator | Saturday 12 July 2025 20:10:08 +0000 (0:00:00.722) 0:00:02.330 *********
2025-07-12 20:19:15.936110 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:19:15.936122 | orchestrator |
2025-07-12 20:19:15.936134 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-07-12 20:19:15.936165 | orchestrator | Saturday 12 July 2025 20:10:08 +0000 (0:00:00.603) 0:00:02.933 *********
2025-07-12 20:19:15.936178 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-07-12 20:19:15.936191 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-07-12 20:19:15.936202 | orchestrator |
2025-07-12 20:19:15.936210 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-07-12 20:19:15.936217 | orchestrator | Saturday 12 July 2025 20:10:13 +0000 (0:00:04.368) 0:00:07.301 *********
2025-07-12 20:19:15.936225 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 20:19:15.936233 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 20:19:15.936240 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.936256 | orchestrator |
2025-07-12 20:19:15.936264 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-07-12 20:19:15.936271 | orchestrator | Saturday 12 July 2025 20:10:17 +0000 (0:00:04.320) 0:00:11.622 *********
2025-07-12 20:19:15.936279 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.936286 | orchestrator |
2025-07-12 20:19:15.936294 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-07-12 20:19:15.936331 | orchestrator | Saturday 12 July 2025 20:10:18 +0000 (0:00:00.749) 0:00:12.372 *********
2025-07-12 20:19:15.936340 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.936348 | orchestrator |
2025-07-12 20:19:15.936355 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-07-12 20:19:15.936363 | orchestrator | Saturday 12 July 2025 20:10:19 +0000 (0:00:01.450) 0:00:13.822 *********
2025-07-12 20:19:15.936409 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.936417 | orchestrator |
2025-07-12 20:19:15.936426 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 20:19:15.936433 | orchestrator | Saturday 12 July 2025 20:10:22 +0000 (0:00:02.803) 0:00:16.625 *********
2025-07-12 20:19:15.936441 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.936449 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.936455 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.936462 | orchestrator |
2025-07-12 20:19:15.936469 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-07-12 20:19:15.936475 | orchestrator | Saturday 12 July 2025 20:10:22 +0000 (0:00:00.478) 0:00:17.104 *********
2025-07-12 20:19:15.936482 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:15.936489 | orchestrator |
2025-07-12 20:19:15.936496 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-07-12 20:19:15.936502 | orchestrator | Saturday 12 July 2025 20:10:53 +0000 (0:00:30.190) 0:00:47.294 *********
2025-07-12 20:19:15.936509 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.936523 | orchestrator |
2025-07-12 20:19:15.936530 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-12 20:19:15.936537 | orchestrator | Saturday 12 July 2025 20:11:07 +0000 (0:00:14.273) 0:01:01.568 *********
2025-07-12 20:19:15.936544 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:15.936550 | orchestrator |
2025-07-12 20:19:15.936557 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-12 20:19:15.936564 | orchestrator | Saturday 12 July 2025 20:11:18 +0000 (0:00:11.476) 0:01:13.044 *********
2025-07-12 20:19:15.936585 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:15.936592 | orchestrator |
2025-07-12 20:19:15.936599 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-07-12 20:19:15.936606 | orchestrator | Saturday 12 July 2025 20:11:21 +0000 (0:00:02.740) 0:01:15.785 *********
2025-07-12 20:19:15.936613 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.936624 | orchestrator |
2025-07-12 20:19:15.936635 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 20:19:15.936646 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.664) 0:01:16.449 *********
2025-07-12 20:19:15.936657 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:19:15.936668 | orchestrator |
2025-07-12 20:19:15.936677 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-07-12 20:19:15.936686 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.424) 0:01:16.874 *********
2025-07-12 20:19:15.936696 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:15.936706 | orchestrator |
2025-07-12 20:19:15.936717 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-07-12 20:19:15.936729 | orchestrator | Saturday 12 July 2025 20:11:40 +0000 (0:00:18.026) 0:01:34.900 *********
2025-07-12 20:19:15.936737 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.936771 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.936783 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.936795 | orchestrator |
2025-07-12 20:19:15.936806 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-07-12 20:19:15.936816 | orchestrator |
2025-07-12 20:19:15.936826 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-07-12 20:19:15.936832 | orchestrator | Saturday 12 July 2025 20:11:40 +0000 (0:00:00.319) 0:01:35.220 *********
2025-07-12 20:19:15.936839 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:19:15.936846 | orchestrator |
2025-07-12 20:19:15.936852 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-07-12 20:19:15.936859 | orchestrator | Saturday 12 July 2025 20:11:41 +0000 (0:00:00.568) 0:01:35.788 *********
2025-07-12 20:19:15.936865 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.936872 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.936879 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.936885 | orchestrator |
2025-07-12 20:19:15.936892 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-07-12 20:19:15.936899 | orchestrator | Saturday 12 July 2025 20:11:43 +0000 (0:00:02.134) 0:01:37.923 *********
2025-07-12 20:19:15.936905 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.936912 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.936918 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.936925 | orchestrator |
2025-07-12 20:19:15.936932 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-07-12 20:19:15.936938 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:02.281) 0:01:40.204 *********
2025-07-12 20:19:15.936945 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.936951 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.936958 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.936965 | orchestrator |
2025-07-12 20:19:15.936971 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-07-12 20:19:15.936978 | orchestrator | Saturday 12 July 2025 20:11:46 +0000 (0:00:00.659) 0:01:40.864 *********
2025-07-12 20:19:15.936992 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 20:19:15.936999 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937005 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 20:19:15.937012 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937019 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-07-12 20:19:15.937026 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-07-12 20:19:15.937032 | orchestrator |
2025-07-12 20:19:15.937039 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-07-12 20:19:15.937051 | orchestrator | Saturday 12 July 2025 20:11:55 +0000 (0:00:09.250) 0:01:50.115 *********
2025-07-12 20:19:15.937058 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.937065 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937071 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937078 | orchestrator |
2025-07-12 20:19:15.937085 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-07-12 20:19:15.937097 | orchestrator | Saturday 12 July 2025 20:11:56 +0000 (0:00:00.730) 0:01:50.845 *********
2025-07-12 20:19:15.937108 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-12 20:19:15.937119 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.937130 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 20:19:15.937141 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937152 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 20:19:15.937164 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937174 | orchestrator |
2025-07-12 20:19:15.937186 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-07-12 20:19:15.937195 | orchestrator | Saturday 12 July 2025 20:11:58 +0000 (0:00:01.899) 0:01:52.745 *********
2025-07-12 20:19:15.937202 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.937208 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937215 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937222 | orchestrator |
2025-07-12 20:19:15.937228 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-07-12 20:19:15.937235 | orchestrator | Saturday 12 July 2025 20:11:59 +0000 (0:00:01.444) 0:01:54.189 *********
2025-07-12 20:19:15.937241 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937248 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937255 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.937261 | orchestrator |
2025-07-12 20:19:15.937268 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-07-12 20:19:15.937275 | orchestrator | Saturday 12 July 2025 20:12:01 +0000 (0:00:01.190) 0:01:55.380 *********
2025-07-12 20:19:15.937282 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937288 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937303 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.937310 | orchestrator |
2025-07-12 20:19:15.937316 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-07-12 20:19:15.937323 | orchestrator | Saturday 12 July 2025 20:12:03 +0000 (0:00:02.779) 0:01:58.159 *********
2025-07-12 20:19:15.937330 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937336 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937343 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:15.937350 | orchestrator |
2025-07-12 20:19:15.937357 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-12 20:19:15.937364 | orchestrator | Saturday 12 July 2025 20:12:22 +0000 (0:00:19.089) 0:02:17.248 *********
2025-07-12 20:19:15.937370 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937377 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937384 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:15.937391 | orchestrator |
2025-07-12 20:19:15.937397 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-12 20:19:15.937411 | orchestrator | Saturday 12 July 2025 20:12:34 +0000 (0:00:11.339) 0:02:28.588 *********
2025-07-12 20:19:15.937418 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:15.937425 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937435 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937445 | orchestrator |
2025-07-12 20:19:15.937454 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-07-12 20:19:15.937471 | orchestrator | Saturday 12 July 2025 20:12:35 +0000 (0:00:01.107) 0:02:29.695 *********
2025-07-12 20:19:15.937483 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937493 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937503 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:19:15.937514 | orchestrator |
2025-07-12 20:19:15.937524 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-07-12 20:19:15.937534 | orchestrator | Saturday 12 July 2025 20:12:47 +0000 (0:00:12.460) 0:02:42.155 *********
2025-07-12 20:19:15.937544 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.937555 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937566 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937577 | orchestrator |
2025-07-12 20:19:15.937588 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-07-12 20:19:15.937598 | orchestrator | Saturday 12 July 2025 20:12:49 +0000 (0:00:01.251) 0:02:43.406 *********
2025-07-12 20:19:15.937610 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.937617 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.937624 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.937630 | orchestrator |
2025-07-12 20:19:15.937637 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-07-12 20:19:15.937643 | orchestrator |
2025-07-12 20:19:15.937650 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 20:19:15.937657 | orchestrator | Saturday 12 July 2025 20:12:49 +0000 (0:00:00.292) 0:02:43.699 *********
2025-07-12 20:19:15.937663 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:19:15.937671 | orchestrator |
2025-07-12 20:19:15.937683 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-07-12 20:19:15.937699 | orchestrator | Saturday 12 July 2025 20:12:49 +0000 (0:00:00.481) 0:02:44.181 *********
2025-07-12 20:19:15.937711 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-07-12 20:19:15.937722 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-07-12 20:19:15.937732 | orchestrator |
2025-07-12 20:19:15.937743 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-07-12 20:19:15.937775 | orchestrator | Saturday 12 July 2025 20:12:53 +0000 (0:00:03.138) 0:02:47.319 *********
2025-07-12 20:19:15.937786 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-07-12 20:19:15.937808 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-07-12 20:19:15.937821 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-07-12 20:19:15.937829 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-07-12 20:19:15.937836 | orchestrator |
2025-07-12 20:19:15.937843 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-07-12 20:19:15.937849 | orchestrator | Saturday 12 July 2025 20:12:59 +0000 (0:00:06.592) 0:02:53.911 *********
2025-07-12 20:19:15.937856 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:19:15.937863 | orchestrator |
2025-07-12 20:19:15.937870 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-07-12 20:19:15.937876 | orchestrator | Saturday 12 July 2025 20:13:03 +0000 (0:00:03.365) 0:02:57.277 *********
2025-07-12 20:19:15.937883 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:19:15.937897 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-07-12 20:19:15.937904 | orchestrator |
2025-07-12 20:19:15.937911 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-07-12 20:19:15.937918 | orchestrator | Saturday 12 July 2025 20:13:07 +0000 (0:00:03.991) 0:03:01.269 *********
2025-07-12 20:19:15.937925 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:19:15.937932 | orchestrator |
2025-07-12 20:19:15.937939 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-07-12 20:19:15.937945 | orchestrator | Saturday 12 July 2025 20:13:10 +0000 (0:00:03.425) 0:03:04.694 *********
2025-07-12 20:19:15.937952 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-07-12 20:19:15.937959 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-07-12 20:19:15.937965 | orchestrator |
2025-07-12 20:19:15.937972 | orchestrator | TASK [nova : Ensuring config directories exist]
********************************
2025-07-12 20:19:15.937985 | orchestrator | Saturday 12 July 2025 20:13:18 +0000 (0:00:07.597) 0:03:12.291 *********
2025-07-12 20:19:15.937998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:19:15.938009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:19:15.938072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:19:15.938097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.938106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.938114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.938121 | orchestrator |
2025-07-12 20:19:15.938128 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-07-12 20:19:15.938135 | orchestrator | Saturday 12 July 2025 20:13:19 +0000 (0:00:01.482) 0:03:13.774 *********
2025-07-12 20:19:15.938141 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.938148 | orchestrator |
2025-07-12 20:19:15.938159 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-07-12 20:19:15.938170 | orchestrator | Saturday 12 July 2025 20:13:19 +0000 (0:00:00.265) 0:03:14.040 *********
2025-07-12 20:19:15.938179 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.938197 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.938211 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.938221 | orchestrator |
2025-07-12 20:19:15.938231 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-07-12 20:19:15.938240 | orchestrator | Saturday 12 July 2025 20:13:20 +0000 (0:00:00.800) 0:03:14.841 *********
2025-07-12 20:19:15.938251 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:19:15.938261 | orchestrator |
2025-07-12 20:19:15.938271 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-07-12 20:19:15.938281 | orchestrator | Saturday 12 July 2025 20:13:21 +0000 (0:00:00.650) 0:03:15.492 *********
2025-07-12 20:19:15.938291 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.938310 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.938319 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.938329 | orchestrator |
2025-07-12 20:19:15.938338 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 20:19:15.938347 | orchestrator | Saturday 12 July 2025 20:13:21 +0000 (0:00:00.299) 0:03:15.791 *********
2025-07-12 20:19:15.938356 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:19:15.938366 | orchestrator |
2025-07-12 20:19:15.938376 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-07-12 20:19:15.938391 | orchestrator | Saturday 12 July 2025 20:13:23 +0000 (0:00:01.579) 0:03:17.371 *********
2025-07-12 20:19:15.938403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:19:15.938424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:19:15.938436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:19:15.938462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.938474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.938492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.938503 | orchestrator |
2025-07-12 20:19:15.938514 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-07-12 20:19:15.938525 | orchestrator | Saturday 12 July 2025 20:13:25 +0000 (0:00:02.756) 0:03:20.128 *********
2025-07-12 20:19:15.938536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:19:15.938549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.938576 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.938592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:19:15.938603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.938614 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.938632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:19:15.938645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.938663
| orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.938674 | orchestrator | 2025-07-12 20:19:15.938685 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-12 20:19:15.938696 | orchestrator | Saturday 12 July 2025 20:13:27 +0000 (0:00:02.033) 0:03:22.162 ********* 2025-07-12 20:19:15.938713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:19:15.938727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.938737 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.938813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:19:15 | INFO  | Task b71a8780-d9da-4ef6-9dbf-8b115d889807 is in state SUCCESS 2025-07-12 20:19:15.938824 | orchestrator | 2025-07-12 20:19:15.938833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.938847 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.938858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:19:15.938866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.938873 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.938880 | orchestrator | 2025-07-12 20:19:15.938886 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-07-12 20:19:15.938893 | orchestrator | Saturday 12 July 2025 20:13:29 +0000 (0:00:01.833) 0:03:23.996 ********* 2025-07-12 20:19:15.938906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:19:15.938915 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:19:15.938932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:19:15.938939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.938952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.938960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.938967 | orchestrator | 2025-07-12 20:19:15.938974 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-07-12 20:19:15.938986 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:03.018) 0:03:27.014 ********* 2025-07-12 20:19:15.938993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:19:15.939004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:19:15.939018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:19:15.939026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.939037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.939045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.939052 | orchestrator | 2025-07-12 20:19:15.939058 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-07-12 20:19:15.939065 | orchestrator | Saturday 12 July 2025 20:13:45 +0000 (0:00:12.567) 0:03:39.582 ********* 2025-07-12 20:19:15.939076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:19:15.939130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.939137 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.939145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:19:15.939157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.939165 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.939176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:19:15.939183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.939190 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.939197 | orchestrator | 2025-07-12 20:19:15.939204 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-07-12 20:19:15.939210 | orchestrator | Saturday 12 July 2025 20:13:45 +0000 (0:00:00.664) 0:03:40.246 ********* 2025-07-12 20:19:15.939222 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:15.939229 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:15.939236 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:15.939247 | orchestrator | 2025-07-12 20:19:15.939254 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-07-12 20:19:15.939261 | orchestrator | Saturday 12 July 2025 20:13:48 +0000 (0:00:02.882) 0:03:43.128 ********* 2025-07-12 20:19:15.939267 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.939274 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.939281 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.939287 | orchestrator | 2025-07-12 20:19:15.939294 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-07-12 20:19:15.939301 | orchestrator | Saturday 12 July 2025 20:13:49 +0000 (0:00:00.514) 0:03:43.642 ********* 2025-07-12 20:19:15.939308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:19:15.939323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:19:15.939336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:19:15.939349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.939356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.939363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.939370 | orchestrator | 2025-07-12 20:19:15.939377 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 20:19:15.939384 | orchestrator | Saturday 12 July 2025 20:13:51 +0000 (0:00:02.033) 0:03:45.676 ********* 2025-07-12 20:19:15.939390 | orchestrator | 2025-07-12 20:19:15.939397 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 20:19:15.939404 | orchestrator | Saturday 12 July 2025 20:13:51 +0000 (0:00:00.366) 0:03:46.043 
********* 2025-07-12 20:19:15.939410 | orchestrator | 2025-07-12 20:19:15.939417 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 20:19:15.939424 | orchestrator | Saturday 12 July 2025 20:13:52 +0000 (0:00:00.277) 0:03:46.320 ********* 2025-07-12 20:19:15.939430 | orchestrator | 2025-07-12 20:19:15.939437 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-07-12 20:19:15.939444 | orchestrator | Saturday 12 July 2025 20:13:52 +0000 (0:00:00.769) 0:03:47.089 ********* 2025-07-12 20:19:15.939450 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:15.939457 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:15.939463 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:15.939470 | orchestrator | 2025-07-12 20:19:15.939477 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-07-12 20:19:15.939487 | orchestrator | Saturday 12 July 2025 20:14:17 +0000 (0:00:24.583) 0:04:11.673 ********* 2025-07-12 20:19:15.939494 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:15.939501 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:15.939508 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:15.939514 | orchestrator | 2025-07-12 20:19:15.939521 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-07-12 20:19:15.939527 | orchestrator | 2025-07-12 20:19:15.939534 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 20:19:15.939567 | orchestrator | Saturday 12 July 2025 20:14:24 +0000 (0:00:06.685) 0:04:18.358 ********* 2025-07-12 20:19:15.939575 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:19:15.939582 | orchestrator | 2025-07-12 
20:19:15.939589 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 20:19:15.939595 | orchestrator | Saturday 12 July 2025 20:14:25 +0000 (0:00:01.646) 0:04:20.005 ********* 2025-07-12 20:19:15.939602 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.939608 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.939615 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.939622 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.939628 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.939635 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.939641 | orchestrator | 2025-07-12 20:19:15.939648 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-07-12 20:19:15.939655 | orchestrator | Saturday 12 July 2025 20:14:27 +0000 (0:00:02.202) 0:04:22.207 ********* 2025-07-12 20:19:15.939661 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.939668 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.939675 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.939799 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:19:15.939807 | orchestrator | 2025-07-12 20:19:15.939819 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-12 20:19:15.939827 | orchestrator | Saturday 12 July 2025 20:14:30 +0000 (0:00:02.374) 0:04:24.581 ********* 2025-07-12 20:19:15.939834 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-07-12 20:19:15.939841 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-07-12 20:19:15.939847 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-07-12 20:19:15.939854 | orchestrator | 2025-07-12 20:19:15.939861 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 
2025-07-12 20:19:15.939868 | orchestrator | Saturday 12 July 2025 20:14:31 +0000 (0:00:01.077) 0:04:25.659 ********* 2025-07-12 20:19:15.939874 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-12 20:19:15.939881 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-07-12 20:19:15.939888 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-12 20:19:15.939894 | orchestrator | 2025-07-12 20:19:15.939901 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 20:19:15.939908 | orchestrator | Saturday 12 July 2025 20:14:32 +0000 (0:00:01.253) 0:04:26.912 ********* 2025-07-12 20:19:15.939914 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-07-12 20:19:15.939921 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.939928 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-07-12 20:19:15.939935 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.939941 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-07-12 20:19:15.939948 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.939955 | orchestrator | 2025-07-12 20:19:15.939961 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-07-12 20:19:15.939968 | orchestrator | Saturday 12 July 2025 20:14:33 +0000 (0:00:01.204) 0:04:28.116 ********* 2025-07-12 20:19:15.939975 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 20:19:15.939981 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 20:19:15.939988 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.939995 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 20:19:15.940001 | orchestrator | skipping: [testbed-node-1] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 20:19:15.940015 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.940022 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 20:19:15.940028 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 20:19:15.940035 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.940041 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 20:19:15.940048 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 20:19:15.940055 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 20:19:15.940062 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 20:19:15.940068 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 20:19:15.940075 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 20:19:15.940082 | orchestrator | 2025-07-12 20:19:15.940088 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-07-12 20:19:15.940095 | orchestrator | Saturday 12 July 2025 20:14:35 +0000 (0:00:01.281) 0:04:29.398 ********* 2025-07-12 20:19:15.940102 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.940109 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.940115 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:19:15.940122 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.940134 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:19:15.940141 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:19:15.940147 | orchestrator | 2025-07-12 20:19:15.940154 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 
2025-07-12 20:19:15.940496 | orchestrator | Saturday 12 July 2025 20:14:36 +0000 (0:00:01.632) 0:04:31.030 ********* 2025-07-12 20:19:15.940509 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.940516 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.940523 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.940530 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:19:15.940536 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:19:15.940543 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:19:15.940550 | orchestrator | 2025-07-12 20:19:15.940556 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-12 20:19:15.940563 | orchestrator | Saturday 12 July 2025 20:14:38 +0000 (0:00:02.044) 0:04:33.075 ********* 2025-07-12 20:19:15.940570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940580 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940620 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940665 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940723 | orchestrator | 2025-07-12 20:19:15.940730 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 20:19:15.940737 | orchestrator | Saturday 12 July 2025 20:14:43 +0000 (0:00:04.345) 0:04:37.421 ********* 2025-07-12 20:19:15.940763 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:19:15.940771 | orchestrator | 2025-07-12 20:19:15.940778 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-12 20:19:15.940785 | orchestrator | Saturday 12 July 2025 20:14:45 +0000 (0:00:01.993) 0:04:39.415 ********* 2025-07-12 20:19:15.940792 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940809 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:19:15.940861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:19:15.941272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:19:15.941290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:19:15.941297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.941313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.941320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.941327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.941361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.941370 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.941382 | orchestrator | 2025-07-12 20:19:15.941389 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-12 20:19:15.941397 | orchestrator | Saturday 12 July 2025 20:14:50 +0000 (0:00:05.099) 0:04:44.515 ********* 2025-07-12 20:19:15.941404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:19:15.941413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:19:15.941420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941427 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.941458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:19:15.941466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:19:15.941480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941487 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.941494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:19:15.941501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:19:15.941508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941515 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.941545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:19:15.941554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2025-07-12 20:19:15.941566 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.941573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:19:15.941581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941587 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.941594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:19:15.941601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941608 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.941615 | orchestrator | 2025-07-12 20:19:15.941622 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-12 20:19:15.941629 | orchestrator | Saturday 12 July 2025 20:14:54 +0000 (0:00:03.910) 0:04:48.426 ********* 2025-07-12 20:19:15.941657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:19:15.941670 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:19:15.941677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941684 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.941692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:19:15.941699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:19:15.941710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:19:15.941740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:19:15.941798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941806 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.941813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941820 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.941827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:19:15.941834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941841 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.941854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:19:15.941889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941898 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.941906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:19:15.941925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.941933 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.941940 | orchestrator | 2025-07-12 20:19:15.941947 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 20:19:15.941955 | orchestrator | Saturday 12 July 2025 20:14:58 +0000 (0:00:03.964) 0:04:52.390 ********* 2025-07-12 20:19:15.941962 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.941970 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.941978 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.941985 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:19:15.941993 | orchestrator | 2025-07-12 20:19:15.942001 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-07-12 20:19:15.942009 | orchestrator | Saturday 12 July 2025 20:14:59 +0000 (0:00:01.087) 0:04:53.477 ********* 2025-07-12 20:19:15.942039 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 20:19:15.942049 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 20:19:15.942057 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 20:19:15.942064 | orchestrator | 2025-07-12 20:19:15.942072 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-07-12 20:19:15.942081 | orchestrator | Saturday 12 July 2025 20:15:01 +0000 (0:00:02.498) 0:04:55.975 ********* 2025-07-12 20:19:15.942088 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 20:19:15.942096 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 20:19:15.942102 | orchestrator | ok: [testbed-node-4 -> 
localhost] 2025-07-12 20:19:15.942109 | orchestrator | 2025-07-12 20:19:15.942116 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-07-12 20:19:15.942122 | orchestrator | Saturday 12 July 2025 20:15:02 +0000 (0:00:00.901) 0:04:56.877 ********* 2025-07-12 20:19:15.942135 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:19:15.942142 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:19:15.942149 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:19:15.942155 | orchestrator | 2025-07-12 20:19:15.942162 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-07-12 20:19:15.942169 | orchestrator | Saturday 12 July 2025 20:15:03 +0000 (0:00:00.685) 0:04:57.562 ********* 2025-07-12 20:19:15.942176 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:19:15.942183 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:19:15.942189 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:19:15.942196 | orchestrator | 2025-07-12 20:19:15.942203 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-07-12 20:19:15.942209 | orchestrator | Saturday 12 July 2025 20:15:03 +0000 (0:00:00.610) 0:04:58.173 ********* 2025-07-12 20:19:15.942216 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-12 20:19:15.942223 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-12 20:19:15.942230 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-12 20:19:15.942237 | orchestrator | 2025-07-12 20:19:15.942243 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-07-12 20:19:15.942250 | orchestrator | Saturday 12 July 2025 20:15:05 +0000 (0:00:01.318) 0:04:59.491 ********* 2025-07-12 20:19:15.942257 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-12 20:19:15.942268 | orchestrator | changed: [testbed-node-4] => 
(item=nova-compute) 2025-07-12 20:19:15.942275 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-12 20:19:15.942282 | orchestrator | 2025-07-12 20:19:15.942289 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-07-12 20:19:15.942317 | orchestrator | Saturday 12 July 2025 20:15:06 +0000 (0:00:01.285) 0:05:00.777 ********* 2025-07-12 20:19:15.942325 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-12 20:19:15.942332 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-12 20:19:15.942338 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-12 20:19:15.942345 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-07-12 20:19:15.942351 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-07-12 20:19:15.942358 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-07-12 20:19:15.942367 | orchestrator | 2025-07-12 20:19:15.942378 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-07-12 20:19:15.942393 | orchestrator | Saturday 12 July 2025 20:15:10 +0000 (0:00:04.441) 0:05:05.218 ********* 2025-07-12 20:19:15.942410 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.942419 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.942430 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.942441 | orchestrator | 2025-07-12 20:19:15.942452 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-07-12 20:19:15.942463 | orchestrator | Saturday 12 July 2025 20:15:11 +0000 (0:00:00.438) 0:05:05.657 ********* 2025-07-12 20:19:15.942474 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.942486 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.942498 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.942510 | orchestrator | 
2025-07-12 20:19:15.942522 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-07-12 20:19:15.942534 | orchestrator | Saturday 12 July 2025 20:15:11 +0000 (0:00:00.320) 0:05:05.977 ********* 2025-07-12 20:19:15.942543 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:19:15.942550 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:19:15.942557 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:19:15.942563 | orchestrator | 2025-07-12 20:19:15.942570 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-07-12 20:19:15.942577 | orchestrator | Saturday 12 July 2025 20:15:13 +0000 (0:00:01.687) 0:05:07.665 ********* 2025-07-12 20:19:15.942591 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-12 20:19:15.942599 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-12 20:19:15.942606 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-12 20:19:15.942612 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-12 20:19:15.942619 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-12 20:19:15.942626 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-12 20:19:15.942633 | orchestrator | 2025-07-12 20:19:15.942639 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-07-12 20:19:15.942646 
| orchestrator | Saturday 12 July 2025 20:15:16 +0000 (0:00:03.372) 0:05:11.037 ********* 2025-07-12 20:19:15.942653 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 20:19:15.942659 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 20:19:15.942666 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 20:19:15.942673 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-12 20:19:15.942679 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:19:15.942686 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-12 20:19:15.942693 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:19:15.942699 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-12 20:19:15.942706 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:19:15.942713 | orchestrator | 2025-07-12 20:19:15.942719 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-07-12 20:19:15.942726 | orchestrator | Saturday 12 July 2025 20:15:20 +0000 (0:00:03.430) 0:05:14.467 ********* 2025-07-12 20:19:15.942733 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.942740 | orchestrator | 2025-07-12 20:19:15.942764 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-07-12 20:19:15.942772 | orchestrator | Saturday 12 July 2025 20:15:20 +0000 (0:00:00.134) 0:05:14.602 ********* 2025-07-12 20:19:15.942778 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.942785 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.942792 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.942799 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.942806 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.942813 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.942820 | orchestrator | 2025-07-12 20:19:15.942826 | orchestrator | TASK [nova-cell : Check for 
vendordata file] ***********************************
2025-07-12 20:19:15.942833 | orchestrator | Saturday 12 July 2025 20:15:21 +0000 (0:00:00.890) 0:05:15.492 *********
2025-07-12 20:19:15.942840 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 20:19:15.942847 | orchestrator |
2025-07-12 20:19:15.942854 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-07-12 20:19:15.942861 | orchestrator | Saturday 12 July 2025 20:15:21 +0000 (0:00:00.731) 0:05:16.224 *********
2025-07-12 20:19:15.942867 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:19:15.942882 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:19:15.942890 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:19:15.942896 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.942903 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.942910 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.942917 | orchestrator |
2025-07-12 20:19:15.942953 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-07-12 20:19:15.942961 | orchestrator | Saturday 12 July 2025 20:15:22 +0000 (0:00:00.690) 0:05:16.914 *********
2025-07-12 20:19:15.942978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:19:15.942986 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:19:15.942994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:19:15.943001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:19:15.943012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:19:15.943025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:19:15.943038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:19:15.943045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:19:15.943052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:19:15.943059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943066 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943138 | orchestrator |
2025-07-12 20:19:15.943148 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-07-12 20:19:15.943159 | orchestrator | Saturday 12 July 2025 20:15:27 +0000 (0:00:04.742) 0:05:21.657 *********
2025-07-12 20:19:15.943171 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:19:15.943184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:19:15.943212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:19:15.943221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:19:15.943228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:19:15.943235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:19:15.943242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:19:15.943285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:19:15.943292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:19:15.943299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.943334 | orchestrator |
2025-07-12 20:19:15.943341 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-07-12 20:19:15.943348 | orchestrator | Saturday 12 July 2025 20:15:35 +0000 (0:00:07.612) 0:05:29.270 *********
2025-07-12 20:19:15.943354 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:19:15.943361 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:19:15.943368 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:19:15.943375 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.943381 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.943388 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.943395 | orchestrator |
2025-07-12 20:19:15.943401 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-07-12 20:19:15.943408 | orchestrator | Saturday 12 July 2025 20:15:36 +0000 (0:00:01.733) 0:05:31.004 *********
2025-07-12 20:19:15.943415 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:19:15.943422 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:19:15.943428 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:19:15.943435 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:19:15.943442 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:19:15.943448 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:19:15.943455 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:19:15.943462 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.943468 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:19:15.943475 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.943482 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:19:15.943488 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.943495 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:19:15.943502 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:19:15.943509 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:19:15.943515 | orchestrator |
2025-07-12 20:19:15.943522 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-07-12 20:19:15.943529 | orchestrator | Saturday 12 July 2025 20:15:41 +0000 (0:00:04.342) 0:05:35.346 *********
2025-07-12 20:19:15.943540 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:19:15.943547 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:19:15.943554 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:19:15.943560 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.943567 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.943574 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.943580 | orchestrator |
2025-07-12 20:19:15.943587 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-07-12 20:19:15.943594 | orchestrator | Saturday 12 July 2025 20:15:41 +0000 (0:00:00.853) 0:05:36.199 *********
2025-07-12 20:19:15.943603 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:19:15.943615 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:19:15.943628 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:19:15.943639 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:19:15.943707 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:19:15.943717 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:19:15.943724 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943730 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943737 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943770 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943794 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943801 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.943816 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943827 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.943837 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943847 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.943857 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943868 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943879 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943891 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943903 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-12 20:19:15.943912 | orchestrator |
2025-07-12 20:19:15.943920 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-07-12 20:19:15.943927 | orchestrator | Saturday 12 July 2025 20:15:47 +0000 (0:00:05.873) 0:05:42.073 *********
2025-07-12 20:19:15.943933 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 20:19:15.943941 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 20:19:15.943947 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 20:19:15.943961 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 20:19:15.943968 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 20:19:15.943974 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 20:19:15.943981 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 20:19:15.943988 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 20:19:15.943994 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 20:19:15.944001 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 20:19:15.944008 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 20:19:15.944014 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 20:19:15.944021 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 20:19:15.944028 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.944034 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 20:19:15.944041 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 20:19:15.944048 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 20:19:15.944054 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.944061 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 20:19:15.944067 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.944074 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 20:19:15.944081 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 20:19:15.944087 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 20:19:15.944094 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 20:19:15.944101 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 20:19:15.944107 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 20:19:15.944114 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 20:19:15.944121 | orchestrator |
2025-07-12 20:19:15.944127 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-07-12 20:19:15.944134 | orchestrator | Saturday 12 July 2025 20:15:55 +0000 (0:00:07.446) 0:05:49.519 *********
2025-07-12 20:19:15.944141 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:19:15.944147 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:19:15.944154 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:19:15.944160 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.944167 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.944174 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.944180 | orchestrator |
2025-07-12 20:19:15.944187 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-07-12 20:19:15.944201 | orchestrator | Saturday 12 July 2025 20:15:55 +0000 (0:00:00.587) 0:05:50.107 *********
2025-07-12 20:19:15.944208 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:19:15.944215 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:19:15.944222 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:19:15.944233 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.944240 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.944247 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.944259 | orchestrator |
2025-07-12 20:19:15.944266 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-07-12 20:19:15.944273 | orchestrator | Saturday 12 July 2025 20:15:57 +0000 (0:00:01.300) 0:05:51.408 *********
2025-07-12 20:19:15.944279 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:15.944286 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:19:15.944292 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:19:15.944299 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:19:15.944306 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:19:15.944316 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:19:15.944327 | orchestrator |
2025-07-12 20:19:15.944336 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-07-12 20:19:15.944346 | orchestrator | Saturday 12 July 2025 20:16:01 +0000 (0:00:03.920) 0:05:55.328 *********
2025-07-12 20:19:15.944356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:19:15.944369 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:19:15.944381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:19:15.944393 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:19:15.944403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:19:15.944426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:19:15.944435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND':
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.944446 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.944459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:19:15.944470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.944482 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.944493 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:19:15.944501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.944513 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.944529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:19:15.944537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:19:15.944544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.944551 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.944558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:19:15.944565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:19:15.944572 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.944579 | orchestrator | 2025-07-12 20:19:15.944586 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-07-12 20:19:15.944592 | orchestrator | Saturday 12 July 2025 20:16:03 +0000 (0:00:02.379) 0:05:57.707 ********* 2025-07-12 20:19:15.944599 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-12 20:19:15.944610 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-12 20:19:15.944616 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-12 20:19:15.944623 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-12 20:19:15.944630 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.944637 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-12 20:19:15.944643 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  
2025-07-12 20:19:15.944650 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.944657 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-12 20:19:15.944663 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-12 20:19:15.944670 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.944677 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-12 20:19:15.944683 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.944694 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-12 20:19:15.944701 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.944708 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-12 20:19:15.944718 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-12 20:19:15.944725 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.944732 | orchestrator | 2025-07-12 20:19:15.944739 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-07-12 20:19:15.944794 | orchestrator | Saturday 12 July 2025 20:16:04 +0000 (0:00:00.643) 0:05:58.351 ********* 2025-07-12 20:19:15.944802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944818 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944830 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944926 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944936 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}}) 2025-07-12 20:19:15.944965 | orchestrator | 2025-07-12 20:19:15.944973 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 20:19:15.944983 | orchestrator | Saturday 12 July 2025 20:16:07 +0000 (0:00:03.418) 0:06:01.769 ********* 2025-07-12 20:19:15.944993 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.945004 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.945014 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.945026 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.945032 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.945038 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.945045 | orchestrator | 2025-07-12 20:19:15.945051 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:19:15.945057 | orchestrator | Saturday 12 July 2025 20:16:08 +0000 (0:00:00.900) 0:06:02.670 ********* 2025-07-12 20:19:15.945064 | orchestrator | 2025-07-12 20:19:15.945070 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:19:15.945076 | orchestrator | Saturday 12 July 2025 20:16:08 +0000 (0:00:00.312) 0:06:02.983 ********* 2025-07-12 20:19:15.945082 | orchestrator | 2025-07-12 20:19:15.945088 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:19:15.945095 | orchestrator | Saturday 12 July 2025 20:16:08 +0000 (0:00:00.144) 0:06:03.127 ********* 2025-07-12 20:19:15.945101 | orchestrator | 2025-07-12 20:19:15.945107 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:19:15.945113 | orchestrator | Saturday 12 July 2025 20:16:08 +0000 (0:00:00.127) 0:06:03.255 ********* 2025-07-12 20:19:15.945119 | orchestrator | 2025-07-12 20:19:15.945126 | orchestrator | TASK [nova-cell : Flush 
handlers] ********************************************** 2025-07-12 20:19:15.945132 | orchestrator | Saturday 12 July 2025 20:16:09 +0000 (0:00:00.128) 0:06:03.384 ********* 2025-07-12 20:19:15.945138 | orchestrator | 2025-07-12 20:19:15.945145 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:19:15.945151 | orchestrator | Saturday 12 July 2025 20:16:09 +0000 (0:00:00.121) 0:06:03.505 ********* 2025-07-12 20:19:15.945157 | orchestrator | 2025-07-12 20:19:15.945163 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-07-12 20:19:15.945169 | orchestrator | Saturday 12 July 2025 20:16:09 +0000 (0:00:00.123) 0:06:03.629 ********* 2025-07-12 20:19:15.945178 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:15.945189 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:15.945199 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:15.945209 | orchestrator | 2025-07-12 20:19:15.945225 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-07-12 20:19:15.945247 | orchestrator | Saturday 12 July 2025 20:16:21 +0000 (0:00:12.173) 0:06:15.802 ********* 2025-07-12 20:19:15.945258 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:15.945264 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:15.945271 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:15.945277 | orchestrator | 2025-07-12 20:19:15.945283 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-07-12 20:19:15.945289 | orchestrator | Saturday 12 July 2025 20:16:37 +0000 (0:00:15.863) 0:06:31.666 ********* 2025-07-12 20:19:15.945295 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:19:15.945302 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:19:15.945308 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:19:15.945314 | orchestrator | 
2025-07-12 20:19:15.945320 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-07-12 20:19:15.945326 | orchestrator | Saturday 12 July 2025 20:17:00 +0000 (0:00:22.630) 0:06:54.296 ********* 2025-07-12 20:19:15.945332 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:19:15.945339 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:19:15.945345 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:19:15.945356 | orchestrator | 2025-07-12 20:19:15.945362 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-07-12 20:19:15.945369 | orchestrator | Saturday 12 July 2025 20:17:40 +0000 (0:00:40.468) 0:07:34.765 ********* 2025-07-12 20:19:15.945375 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:19:15.945381 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:19:15.945387 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:19:15.945393 | orchestrator | 2025-07-12 20:19:15.945399 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-07-12 20:19:15.945405 | orchestrator | Saturday 12 July 2025 20:17:41 +0000 (0:00:00.942) 0:07:35.707 ********* 2025-07-12 20:19:15.945412 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:19:15.945418 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:19:15.945424 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:19:15.945430 | orchestrator | 2025-07-12 20:19:15.945436 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-07-12 20:19:15.945443 | orchestrator | Saturday 12 July 2025 20:17:42 +0000 (0:00:00.833) 0:07:36.540 ********* 2025-07-12 20:19:15.945449 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:19:15.945455 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:19:15.945461 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:19:15.945467 | orchestrator | 2025-07-12 
20:19:15.945474 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-07-12 20:19:15.945480 | orchestrator | Saturday 12 July 2025 20:18:04 +0000 (0:00:22.198) 0:07:58.739 ********* 2025-07-12 20:19:15.945486 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.945492 | orchestrator | 2025-07-12 20:19:15.945498 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-07-12 20:19:15.945505 | orchestrator | Saturday 12 July 2025 20:18:04 +0000 (0:00:00.124) 0:07:58.863 ********* 2025-07-12 20:19:15.945511 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.945517 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.945523 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.945529 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.945536 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.945542 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-07-12 20:19:15.945548 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:19:15.945554 | orchestrator | 2025-07-12 20:19:15.945561 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-07-12 20:19:15.945567 | orchestrator | Saturday 12 July 2025 20:18:25 +0000 (0:00:20.930) 0:08:19.794 ********* 2025-07-12 20:19:15.945573 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.945579 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.945585 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.945592 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.945598 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.945604 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.945610 | orchestrator | 2025-07-12 20:19:15.945616 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-07-12 20:19:15.945622 | orchestrator | Saturday 12 July 2025 20:18:34 +0000 (0:00:09.066) 0:08:28.860 ********* 2025-07-12 20:19:15.945629 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.945635 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.945641 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.945647 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.945653 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.945659 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-07-12 20:19:15.945665 | orchestrator | 2025-07-12 20:19:15.945672 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-12 20:19:15.945678 | orchestrator | Saturday 12 July 2025 20:18:39 +0000 (0:00:04.924) 0:08:33.785 ********* 2025-07-12 20:19:15.945691 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:19:15.945697 | 
orchestrator | 2025-07-12 20:19:15.945704 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-12 20:19:15.945710 | orchestrator | Saturday 12 July 2025 20:18:53 +0000 (0:00:14.020) 0:08:47.805 ********* 2025-07-12 20:19:15.945716 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:19:15.945722 | orchestrator | 2025-07-12 20:19:15.945728 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-07-12 20:19:15.945734 | orchestrator | Saturday 12 July 2025 20:18:54 +0000 (0:00:01.444) 0:08:49.250 ********* 2025-07-12 20:19:15.945741 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.945766 | orchestrator | 2025-07-12 20:19:15.945772 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-07-12 20:19:15.945779 | orchestrator | Saturday 12 July 2025 20:18:56 +0000 (0:00:01.324) 0:08:50.575 ********* 2025-07-12 20:19:15.945789 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:19:15.945796 | orchestrator | 2025-07-12 20:19:15.945802 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-07-12 20:19:15.945812 | orchestrator | Saturday 12 July 2025 20:19:07 +0000 (0:00:11.514) 0:09:02.089 ********* 2025-07-12 20:19:15.945818 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:19:15.945824 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:19:15.945831 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:19:15.945837 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:19:15.945843 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:19:15.945849 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:19:15.945855 | orchestrator | 2025-07-12 20:19:15.945862 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-07-12 20:19:15.945868 | orchestrator | 2025-07-12 
20:19:15.945874 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-07-12 20:19:15.945881 | orchestrator | Saturday 12 July 2025 20:19:09 +0000 (0:00:01.645) 0:09:03.735 ********* 2025-07-12 20:19:15.945887 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:15.945893 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:15.945899 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:15.945905 | orchestrator | 2025-07-12 20:19:15.945911 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-07-12 20:19:15.945918 | orchestrator | 2025-07-12 20:19:15.945924 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-07-12 20:19:15.945930 | orchestrator | Saturday 12 July 2025 20:19:10 +0000 (0:00:01.151) 0:09:04.886 ********* 2025-07-12 20:19:15.945936 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.945943 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.945949 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.945955 | orchestrator | 2025-07-12 20:19:15.945961 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-07-12 20:19:15.945967 | orchestrator | 2025-07-12 20:19:15.945974 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-07-12 20:19:15.945980 | orchestrator | Saturday 12 July 2025 20:19:11 +0000 (0:00:00.513) 0:09:05.400 ********* 2025-07-12 20:19:15.945986 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-07-12 20:19:15.945992 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-12 20:19:15.945998 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-12 20:19:15.946005 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-07-12 20:19:15.946011 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-07-12 20:19:15.946045 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-07-12 20:19:15.946056 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:19:15.946065 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-07-12 20:19:15.946075 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-12 20:19:15.946092 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-12 20:19:15.946102 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-07-12 20:19:15.946111 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-07-12 20:19:15.946122 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-07-12 20:19:15.946133 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:19:15.946144 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-07-12 20:19:15.946150 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-12 20:19:15.946157 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-12 20:19:15.946163 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-07-12 20:19:15.946169 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-07-12 20:19:15.946175 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-07-12 20:19:15.946181 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:19:15.946187 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-07-12 20:19:15.946194 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-12 20:19:15.946200 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-12 20:19:15.946206 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-07-12 20:19:15.946212 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-07-12 20:19:15.946218 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-07-12 20:19:15.946224 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.946230 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-07-12 20:19:15.946236 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-12 20:19:15.946242 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-12 20:19:15.946249 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-07-12 20:19:15.946255 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-07-12 20:19:15.946261 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-07-12 20:19:15.946267 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.946273 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-07-12 20:19:15.946279 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-12 20:19:15.946285 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-12 20:19:15.946291 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-07-12 20:19:15.946298 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-07-12 20:19:15.946304 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-07-12 20:19:15.946310 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.946316 | orchestrator | 2025-07-12 20:19:15.946322 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-07-12 20:19:15.946328 | orchestrator | 2025-07-12 20:19:15.946339 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-07-12 20:19:15.946345 | orchestrator | Saturday 12 July 2025 20:19:12 +0000 (0:00:01.488) 
0:09:06.888 ********* 2025-07-12 20:19:15.946351 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-07-12 20:19:15.946363 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-07-12 20:19:15.946369 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.946376 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-07-12 20:19:15.946382 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-07-12 20:19:15.946388 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.946394 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-07-12 20:19:15.946400 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-07-12 20:19:15.946412 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:15.946418 | orchestrator | 2025-07-12 20:19:15.946424 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-07-12 20:19:15.946431 | orchestrator | 2025-07-12 20:19:15.946437 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-07-12 20:19:15.946443 | orchestrator | Saturday 12 July 2025 20:19:13 +0000 (0:00:00.815) 0:09:07.703 ********* 2025-07-12 20:19:15.946449 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.946455 | orchestrator | 2025-07-12 20:19:15.946461 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-07-12 20:19:15.946467 | orchestrator | 2025-07-12 20:19:15.946474 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-07-12 20:19:15.946480 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 (0:00:00.748) 0:09:08.452 ********* 2025-07-12 20:19:15.946486 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:15.946492 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:15.946498 | orchestrator | skipping: [testbed-node-2] 
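The "online database migrations" tasks (skipped on this run) are batch-wise by design: nova-manage applies up to N migrations per pass and is re-run until a pass migrates zero rows. A minimal sketch of that driver loop, with a hypothetical `migrate_batch` callable standing in for `nova-manage db online_data_migrations --max-count N`:

```python
def run_online_migrations(migrate_batch, batch_size=50):
    """Apply migration batches until a pass reports nothing left to do,
    then return the total number of rows migrated."""
    total = 0
    while True:
        migrated = migrate_batch(batch_size)  # rows handled this pass
        if migrated == 0:
            return total
        total += migrated
```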
2025-07-12 20:19:15.946504 | orchestrator | 2025-07-12 20:19:15.946511 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:19:15.946517 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:19:15.946524 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-07-12 20:19:15.946530 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-07-12 20:19:15.946537 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-07-12 20:19:15.946543 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-12 20:19:15.946549 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-07-12 20:19:15.946556 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-07-12 20:19:15.946562 | orchestrator | 2025-07-12 20:19:15.946568 | orchestrator | 2025-07-12 20:19:15.946574 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:19:15.946580 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 (0:00:00.469) 0:09:08.922 ********* 2025-07-12 20:19:15.946587 | orchestrator | =============================================================================== 2025-07-12 20:19:15.946593 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 40.47s 2025-07-12 20:19:15.946599 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.19s 2025-07-12 20:19:15.946605 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 24.58s 2025-07-12 20:19:15.946611 | orchestrator | nova-cell : 
Restart nova-ssh container --------------------------------- 22.63s 2025-07-12 20:19:15.946617 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.20s 2025-07-12 20:19:15.946624 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 20.93s 2025-07-12 20:19:15.946630 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.09s 2025-07-12 20:19:15.946636 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.03s 2025-07-12 20:19:15.946642 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.86s 2025-07-12 20:19:15.946648 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.27s 2025-07-12 20:19:15.946659 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.02s 2025-07-12 20:19:15.946665 | orchestrator | nova : Copying over nova.conf ------------------------------------------ 12.57s 2025-07-12 20:19:15.946671 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.46s 2025-07-12 20:19:15.946677 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.17s 2025-07-12 20:19:15.946684 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.51s 2025-07-12 20:19:15.946690 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.48s 2025-07-12 20:19:15.946696 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.34s 2025-07-12 20:19:15.946706 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.25s 2025-07-12 20:19:15.946712 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.07s 2025-07-12 20:19:15.946718 | orchestrator | nova-cell : Copying over 
nova.conf -------------------------------------- 7.61s 2025-07-12 20:19:15.946728 | orchestrator | 2025-07-12 20:19:15 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state STARTED 2025-07-12 20:19:15.946734 | orchestrator | 2025-07-12 20:19:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:21:57.401655 | orchestrator | 2025-07-12 20:21:57 | INFO  | Task 843fcd26-44aa-4bae-9a61-c28ca56c29c5 is in state SUCCESS 2025-07-12 20:21:57.402884 | orchestrator | 2025-07-12 20:21:57.402928 | orchestrator | 2025-07-12 20:21:57.402938 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:21:57.402946 | orchestrator | 2025-07-12 20:21:57.402953 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:21:57.402960 | orchestrator | Saturday 12 July 2025 20:17:09 +0000 (0:00:00.260) 0:00:00.260 ********* 2025-07-12 20:21:57.402968 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:21:57.402976 | orchestrator | ok: [testbed-node-1] 2025-07-12 
20:21:57.402982 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:21:57.402989 | orchestrator | 2025-07-12 20:21:57.403021 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:21:57.403028 | orchestrator | Saturday 12 July 2025 20:17:09 +0000 (0:00:00.267) 0:00:00.527 ********* 2025-07-12 20:21:57.403049 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-07-12 20:21:57.403056 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-07-12 20:21:57.403063 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-07-12 20:21:57.403069 | orchestrator | 2025-07-12 20:21:57.403075 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-07-12 20:21:57.403081 | orchestrator | 2025-07-12 20:21:57.403087 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 20:21:57.403092 | orchestrator | Saturday 12 July 2025 20:17:10 +0000 (0:00:00.357) 0:00:00.885 ********* 2025-07-12 20:21:57.403099 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:21:57.403106 | orchestrator | 2025-07-12 20:21:57.403113 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-07-12 20:21:57.403119 | orchestrator | Saturday 12 July 2025 20:17:10 +0000 (0:00:00.545) 0:00:01.430 ********* 2025-07-12 20:21:57.403126 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-07-12 20:21:57.403132 | orchestrator | 2025-07-12 20:21:57.403138 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-07-12 20:21:57.403143 | orchestrator | Saturday 12 July 2025 20:17:13 +0000 (0:00:02.970) 0:00:04.401 ********* 2025-07-12 20:21:57.403149 | orchestrator | changed: [testbed-node-0] => (item=octavia -> 
https://api-int.testbed.osism.xyz:9876 -> internal) 2025-07-12 20:21:57.403156 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-07-12 20:21:57.403162 | orchestrator | 2025-07-12 20:21:57.403168 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-07-12 20:21:57.403173 | orchestrator | Saturday 12 July 2025 20:17:20 +0000 (0:00:06.603) 0:00:11.005 ********* 2025-07-12 20:21:57.403179 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 20:21:57.403185 | orchestrator | 2025-07-12 20:21:57.403190 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-07-12 20:21:57.403195 | orchestrator | Saturday 12 July 2025 20:17:23 +0000 (0:00:03.339) 0:00:14.345 ********* 2025-07-12 20:21:57.403200 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 20:21:57.403206 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-12 20:21:57.403211 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-12 20:21:57.403217 | orchestrator | 2025-07-12 20:21:57.403223 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-07-12 20:21:57.403229 | orchestrator | Saturday 12 July 2025 20:17:32 +0000 (0:00:08.317) 0:00:22.662 ********* 2025-07-12 20:21:57.403234 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 20:21:57.403240 | orchestrator | 2025-07-12 20:21:57.403246 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-07-12 20:21:57.403251 | orchestrator | Saturday 12 July 2025 20:17:35 +0000 (0:00:03.325) 0:00:25.988 ********* 2025-07-12 20:21:57.403257 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-12 20:21:57.403263 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> 
admin)
2025-07-12 20:21:57.403269 | orchestrator |
2025-07-12 20:21:57.403275 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-07-12 20:21:57.403281 | orchestrator | Saturday 12 July 2025 20:17:43 +0000 (0:00:08.071) 0:00:34.059 *********
2025-07-12 20:21:57.403287 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-07-12 20:21:57.403292 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-07-12 20:21:57.403298 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-07-12 20:21:57.403311 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-07-12 20:21:57.403317 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-07-12 20:21:57.403322 | orchestrator |
2025-07-12 20:21:57.403328 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-12 20:21:57.403334 | orchestrator | Saturday 12 July 2025 20:17:59 +0000 (0:00:16.109) 0:00:50.169 *********
2025-07-12 20:21:57.403340 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:21:57.403345 | orchestrator |
2025-07-12 20:21:57.403351 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-07-12 20:21:57.403357 | orchestrator | Saturday 12 July 2025 20:18:00 +0000 (0:00:00.506) 0:00:50.675 *********
2025-07-12 20:21:57.403363 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.403368 | orchestrator |
2025-07-12 20:21:57.403374 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-07-12 20:21:57.403380 | orchestrator | Saturday 12 July 2025 20:18:05 +0000 (0:00:05.596) 0:00:56.272 *********
2025-07-12 20:21:57.403386 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.403391 | orchestrator |
2025-07-12 20:21:57.403397 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-07-12 20:21:57.403414 | orchestrator | Saturday 12 July 2025 20:18:10 +0000 (0:00:04.386) 0:01:00.658 *********
2025-07-12 20:21:57.403420 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:21:57.403426 | orchestrator |
2025-07-12 20:21:57.403431 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-07-12 20:21:57.403437 | orchestrator | Saturday 12 July 2025 20:18:13 +0000 (0:00:03.260) 0:01:03.918 *********
2025-07-12 20:21:57.403443 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-07-12 20:21:57.403449 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-07-12 20:21:57.403454 | orchestrator |
2025-07-12 20:21:57.403460 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-07-12 20:21:57.403466 | orchestrator | Saturday 12 July 2025 20:18:24 +0000 (0:00:11.451) 0:01:15.370 *********
2025-07-12 20:21:57.403473 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-07-12 20:21:57.403479 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-07-12 20:21:57.403488 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-07-12 20:21:57.403496 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-07-12 20:21:57.403502 | orchestrator |
2025-07-12 20:21:57.403508 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-07-12 20:21:57.403514 | orchestrator | Saturday 12 July 2025 20:18:41 +0000 (0:00:16.796) 0:01:32.166 *********
2025-07-12 20:21:57.403519 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.403562 | orchestrator |
2025-07-12 20:21:57.403569 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-07-12 20:21:57.403575 | orchestrator | Saturday 12 July 2025 20:18:46 +0000 (0:00:05.117) 0:01:37.283 *********
2025-07-12 20:21:57.403581 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.403587 | orchestrator |
2025-07-12 20:21:57.403594 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-07-12 20:21:57.403600 | orchestrator | Saturday 12 July 2025 20:18:53 +0000 (0:00:06.367) 0:01:43.650 *********
2025-07-12 20:21:57.403606 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:21:57.403612 | orchestrator |
2025-07-12 20:21:57.403618 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-07-12 20:21:57.403625 | orchestrator | Saturday 12 July 2025 20:18:53 +0000 (0:00:00.210) 0:01:43.861 *********
2025-07-12 20:21:57.403635 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.403641 | orchestrator |
2025-07-12 20:21:57.403648 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-12 20:21:57.403654 | orchestrator | Saturday 12 July 2025 20:18:58 +0000 (0:00:05.015) 0:01:48.877 *********
2025-07-12 20:21:57.403660 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:21:57.404059 | orchestrator |
2025-07-12 20:21:57.404076 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-07-12 20:21:57.404083 | orchestrator | Saturday 12 July 2025 20:18:59 +0000 (0:00:01.058) 0:01:49.935 *********
2025-07-12 20:21:57.404090 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.404096 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:21:57.404102 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:21:57.404109 | orchestrator |
2025-07-12 20:21:57.404115 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-07-12 20:21:57.404122 | orchestrator | Saturday 12 July 2025 20:19:04 +0000 (0:00:05.630) 0:01:55.566 *********
2025-07-12 20:21:57.404128 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.404133 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:21:57.404139 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:21:57.404145 | orchestrator |
2025-07-12 20:21:57.404151 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-07-12 20:21:57.404158 | orchestrator | Saturday 12 July 2025 20:19:09 +0000 (0:00:04.136) 0:01:59.702 *********
2025-07-12 20:21:57.404164 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.404170 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:21:57.404176 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:21:57.404182 | orchestrator |
2025-07-12 20:21:57.404229 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-07-12 20:21:57.404236 | orchestrator | Saturday 12 July 2025 20:19:09 +0000 (0:00:00.716) 0:02:00.419 *********
2025-07-12 20:21:57.404242 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:21:57.404248 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:21:57.404255 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:21:57.404261 | orchestrator |
2025-07-12 20:21:57.404267 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-07-12 20:21:57.404273 | orchestrator | Saturday 12 July 2025 20:19:11 +0000 (0:00:01.918) 0:02:02.337 *********
2025-07-12 20:21:57.404283 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.404289 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:21:57.404324 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:21:57.404332 | orchestrator |
2025-07-12 20:21:57.404338 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-07-12 20:21:57.404344 | orchestrator | Saturday 12 July 2025 20:19:12 +0000 (0:00:01.182) 0:02:03.520 *********
2025-07-12 20:21:57.404351 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.404358 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:21:57.404364 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:21:57.404371 | orchestrator |
2025-07-12 20:21:57.404377 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-07-12 20:21:57.404384 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 (0:00:01.116) 0:02:04.636 *********
2025-07-12 20:21:57.404391 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:21:57.404397 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:21:57.404404 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.404473 | orchestrator |
2025-07-12 20:21:57.404490 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-07-12 20:21:57.404496 | orchestrator | Saturday 12 July 2025 20:19:15 +0000 (0:00:01.923) 0:02:06.559 *********
2025-07-12 20:21:57.404503 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:21:57.404508 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:21:57.404514 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:21:57.404520 | orchestrator |
2025-07-12 20:21:57.404709 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-07-12 20:21:57.404718 | orchestrator | Saturday 12 July 2025 20:19:17 +0000 (0:00:01.718) 0:02:08.278 *********
2025-07-12 20:21:57.404724 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:21:57.404730 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:21:57.404736 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:21:57.404741 | orchestrator |
2025-07-12 20:21:57.404753 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-07-12 20:21:57.404759 | orchestrator | Saturday 12 July 2025 20:19:18 +0000 (0:00:00.590) 0:02:08.869 *********
2025-07-12 20:21:57.404765 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:21:57.404771 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:21:57.404776 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:21:57.404783 | orchestrator |
2025-07-12 20:21:57.404789 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-12 20:21:57.404795 | orchestrator | Saturday 12 July 2025 20:19:20 +0000 (0:00:02.550) 0:02:11.419 *********
2025-07-12 20:21:57.404802 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:21:57.404808 | orchestrator |
2025-07-12 20:21:57.404813 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-07-12 20:21:57.404819 | orchestrator | Saturday 12 July 2025 20:19:21 +0000 (0:00:00.724) 0:02:12.144 *********
2025-07-12 20:21:57.404825 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:21:57.404831 | orchestrator |
2025-07-12 20:21:57.404836 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-07-12 20:21:57.404842 | orchestrator | Saturday 12 July 2025 20:19:25 +0000 (0:00:03.795) 0:02:15.939 *********
2025-07-12 20:21:57.404849 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:21:57.404854 | orchestrator |
2025-07-12 20:21:57.404860 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-07-12 20:21:57.404866 | orchestrator | Saturday 12 July 2025 20:19:28 +0000 (0:00:03.153) 0:02:19.093 *********
2025-07-12 20:21:57.404872 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-07-12 20:21:57.404878 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-07-12 20:21:57.404884 | orchestrator |
2025-07-12 20:21:57.404890 | orchestrator | TASK [octavia : Get loadbalancer management network] ***************************
2025-07-12 20:21:57.404895 | orchestrator | Saturday 12 July 2025 20:19:35 +0000 (0:00:06.725) 0:02:25.818 *********
2025-07-12 20:21:57.404902 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:21:57.404908 | orchestrator |
2025-07-12 20:21:57.404914 | orchestrator | TASK [octavia : Set octavia resources facts] ***********************************
2025-07-12 20:21:57.404919 | orchestrator | Saturday 12 July 2025 20:19:38 +0000 (0:00:03.467) 0:02:29.286 *********
2025-07-12 20:21:57.404925 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:21:57.404931 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:21:57.404936 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:21:57.404942 | orchestrator |
2025-07-12 20:21:57.404948 | orchestrator | TASK [octavia : Ensuring config directories exist] *****************************
2025-07-12 20:21:57.404953 | orchestrator | Saturday 12 July 2025 20:19:39 +0000 (0:00:00.350) 0:02:29.636 *********
2025-07-12 20:21:57.404963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:21:57.404996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:21:57.405007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:21:57.405014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-12 20:21:57.405021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-12 20:21:57.405027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-12 20:21:57.405034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:21:57.405107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:21:57.405117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:21:57.405123 | orchestrator |
2025-07-12 20:21:57.405129 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2025-07-12 20:21:57.405136 | orchestrator | Saturday 12 July 2025 20:19:41 +0000 (0:00:02.755) 0:02:32.392 *********
2025-07-12 20:21:57.405141 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:21:57.405147 | orchestrator |
2025-07-12 20:21:57.405166 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2025-07-12 20:21:57.405172 | orchestrator | Saturday 12 July 2025 20:19:42 +0000 (0:00:00.346) 0:02:32.738 *********
2025-07-12 20:21:57.405177 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:21:57.405183 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:21:57.405188 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:21:57.405194 | orchestrator |
2025-07-12 20:21:57.405200 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2025-07-12 20:21:57.405206 | orchestrator | Saturday 12 July 2025 20:19:42 +0000 (0:00:00.300) 0:02:33.039 *********
2025-07-12 20:21:57.405215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:21:57.405221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-12 20:21:57.405227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:21:57.405259 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:21:57.405283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:21:57.405291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-12 20:21:57.405298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:21:57.405324 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:21:57.405330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:21:57.405352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-12 20:21:57.405363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-12 20:21:57.405376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:21:57.405388 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:21:57.405395 | orchestrator |
2025-07-12 20:21:57.405402 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-12 20:21:57.405408 | orchestrator | Saturday 12 July 2025 20:19:43 +0000 (0:00:00.711) 0:02:33.751 *********
2025-07-12 20:21:57.405414 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:21:57.405421 | orchestrator |
2025-07-12 20:21:57.405427 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2025-07-12 20:21:57.405433 | orchestrator | Saturday 12 July 2025 20:19:43 +0000 (0:00:00.506) 0:02:34.258 *********
2025-07-12 20:21:57.405439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:21:57.405462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:21:57.405474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:21:57.405482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-12 20:21:57.405493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'],
'dimensions': {}}}) 2025-07-12 20:21:57.405501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.405508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.405515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.405546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.405556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.405563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.405576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.405582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.405590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.405602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.405610 | orchestrator | 2025-07-12 20:21:57.405618 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-07-12 20:21:57.405625 | orchestrator | Saturday 12 July 2025 20:19:48 +0000 (0:00:05.280) 0:02:39.539 ********* 2025-07-12 20:21:57.405636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:21:57.405649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:21:57.405656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:21:57.405678 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:21:57.405690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:21:57.405704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:21:57.405712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:21:57.405739 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:21:57.405746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:21:57.405758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:21:57.405768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:21:57.405796 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:21:57.405803 | orchestrator | 2025-07-12 20:21:57.405810 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-07-12 20:21:57.405817 | orchestrator | Saturday 12 July 2025 20:19:49 +0000 (0:00:00.657) 0:02:40.196 ********* 2025-07-12 20:21:57.405825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:21:57.405833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:21:57.405840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:21:57.405879 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:21:57.405886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-07-12 20:21:57.405893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:21:57.405901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:21:57.405954 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:21:57.405966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:21:57.405973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:21:57.405981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:21:57.405996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 
20:21:57.406003 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:21:57.406010 | orchestrator | 2025-07-12 20:21:57.406081 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-07-12 20:21:57.406089 | orchestrator | Saturday 12 July 2025 20:19:50 +0000 (0:00:00.850) 0:02:41.047 ********* 2025-07-12 20:21:57.406106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:21:57.406121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:21:57.406129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:21:57.406137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.406144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.406152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.406169 | orchestrator | 2025-07-12 20:21:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:21:57.406182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406266 | orchestrator | 2025-07-12 20:21:57.406273 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-07-12 20:21:57.406281 | orchestrator | Saturday 12 July 2025 20:19:55 +0000 (0:00:05.329) 0:02:46.376 ********* 2025-07-12 20:21:57.406288 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-12 20:21:57.406295 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-12 20:21:57.406301 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-12 20:21:57.406306 | orchestrator | 2025-07-12 20:21:57.406313 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-07-12 20:21:57.406320 | orchestrator | Saturday 12 July 2025 20:19:57 +0000 (0:00:01.658) 0:02:48.035 ********* 2025-07-12 20:21:57.406328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:21:57.406335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:21:57.406356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:21:57.406365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.406373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.406380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.406388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406425 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406476 | orchestrator | 2025-07-12 20:21:57.406483 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-07-12 20:21:57.406491 | orchestrator | Saturday 12 July 2025 20:20:14 +0000 (0:00:16.675) 0:03:04.711 ********* 2025-07-12 20:21:57.406498 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:21:57.406505 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.406513 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:21:57.406520 | orchestrator | 2025-07-12 20:21:57.406545 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-07-12 20:21:57.406552 | orchestrator | Saturday 12 July 2025 20:20:15 +0000 (0:00:01.727) 0:03:06.439 ********* 2025-07-12 20:21:57.406564 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-12 20:21:57.406571 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-12 20:21:57.406578 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-12 20:21:57.406584 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-12 20:21:57.406591 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-12 20:21:57.406598 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-12 20:21:57.406605 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-12 20:21:57.406612 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-12 20:21:57.406625 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-12 20:21:57.406632 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-12 20:21:57.406639 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-12 20:21:57.406646 | orchestrator | changed: 
[testbed-node-1] => (item=server_ca.key.pem) 2025-07-12 20:21:57.406653 | orchestrator | 2025-07-12 20:21:57.406659 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-07-12 20:21:57.406667 | orchestrator | Saturday 12 July 2025 20:20:20 +0000 (0:00:05.165) 0:03:11.604 ********* 2025-07-12 20:21:57.406674 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-12 20:21:57.406681 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-12 20:21:57.406688 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-12 20:21:57.406694 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-12 20:21:57.406701 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-12 20:21:57.406708 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-12 20:21:57.406715 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-12 20:21:57.406722 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-12 20:21:57.406730 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-12 20:21:57.406736 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-12 20:21:57.406743 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-12 20:21:57.406750 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-12 20:21:57.406757 | orchestrator | 2025-07-12 20:21:57.406764 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-07-12 20:21:57.406776 | orchestrator | Saturday 12 July 2025 20:20:26 +0000 (0:00:05.343) 0:03:16.947 ********* 2025-07-12 20:21:57.406783 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-12 20:21:57.406790 | orchestrator | changed: [testbed-node-1] => 
(item=client.cert-and-key.pem) 2025-07-12 20:21:57.406797 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-12 20:21:57.406804 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-12 20:21:57.406811 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-12 20:21:57.406818 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-12 20:21:57.406825 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-12 20:21:57.406832 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-12 20:21:57.406839 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-12 20:21:57.406846 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-12 20:21:57.406852 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-12 20:21:57.406859 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-12 20:21:57.406866 | orchestrator | 2025-07-12 20:21:57.406873 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-07-12 20:21:57.406880 | orchestrator | Saturday 12 July 2025 20:20:31 +0000 (0:00:05.277) 0:03:22.225 ********* 2025-07-12 20:21:57.406888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:21:57.406904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:21:57.406912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:21:57.406925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.406933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.406940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:21:57.406947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.406998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:21:57.407006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.407013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.407025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:21:57.407032 | orchestrator | 2025-07-12 20:21:57.407039 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 20:21:57.407046 | orchestrator | Saturday 12 July 2025 20:20:35 +0000 (0:00:03.759) 0:03:25.984 ********* 2025-07-12 20:21:57.407053 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:21:57.407060 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 20:21:57.407071 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:21:57.407077 | orchestrator | 2025-07-12 20:21:57.407083 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-07-12 20:21:57.407094 | orchestrator | Saturday 12 July 2025 20:20:35 +0000 (0:00:00.296) 0:03:26.281 ********* 2025-07-12 20:21:57.407100 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407105 | orchestrator | 2025-07-12 20:21:57.407112 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-07-12 20:21:57.407118 | orchestrator | Saturday 12 July 2025 20:20:37 +0000 (0:00:02.044) 0:03:28.326 ********* 2025-07-12 20:21:57.407126 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407132 | orchestrator | 2025-07-12 20:21:57.407139 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-07-12 20:21:57.407145 | orchestrator | Saturday 12 July 2025 20:20:40 +0000 (0:00:02.642) 0:03:30.969 ********* 2025-07-12 20:21:57.407152 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407159 | orchestrator | 2025-07-12 20:21:57.407166 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-07-12 20:21:57.407173 | orchestrator | Saturday 12 July 2025 20:20:42 +0000 (0:00:02.186) 0:03:33.155 ********* 2025-07-12 20:21:57.407180 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407187 | orchestrator | 2025-07-12 20:21:57.407194 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-07-12 20:21:57.407201 | orchestrator | Saturday 12 July 2025 20:20:44 +0000 (0:00:02.224) 0:03:35.380 ********* 2025-07-12 20:21:57.407208 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407215 | orchestrator | 2025-07-12 20:21:57.407222 | orchestrator | TASK [octavia : Flush handlers] 
************************************************ 2025-07-12 20:21:57.407229 | orchestrator | Saturday 12 July 2025 20:21:04 +0000 (0:00:19.587) 0:03:54.967 ********* 2025-07-12 20:21:57.407235 | orchestrator | 2025-07-12 20:21:57.407242 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-12 20:21:57.407249 | orchestrator | Saturday 12 July 2025 20:21:04 +0000 (0:00:00.063) 0:03:55.030 ********* 2025-07-12 20:21:57.407256 | orchestrator | 2025-07-12 20:21:57.407263 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-07-12 20:21:57.407270 | orchestrator | Saturday 12 July 2025 20:21:04 +0000 (0:00:00.062) 0:03:55.093 ********* 2025-07-12 20:21:57.407277 | orchestrator | 2025-07-12 20:21:57.407284 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-07-12 20:21:57.407291 | orchestrator | Saturday 12 July 2025 20:21:04 +0000 (0:00:00.062) 0:03:55.155 ********* 2025-07-12 20:21:57.407298 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407304 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:21:57.407313 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:21:57.407322 | orchestrator | 2025-07-12 20:21:57.407329 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-07-12 20:21:57.407336 | orchestrator | Saturday 12 July 2025 20:21:20 +0000 (0:00:15.645) 0:04:10.801 ********* 2025-07-12 20:21:57.407342 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:21:57.407349 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:21:57.407356 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407363 | orchestrator | 2025-07-12 20:21:57.407370 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-07-12 20:21:57.407377 | orchestrator | Saturday 12 July 2025 20:21:28 +0000 (0:00:08.204) 
0:04:19.006 ********* 2025-07-12 20:21:57.407385 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:21:57.407391 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:21:57.407399 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407406 | orchestrator | 2025-07-12 20:21:57.407413 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-07-12 20:21:57.407420 | orchestrator | Saturday 12 July 2025 20:21:37 +0000 (0:00:08.704) 0:04:27.711 ********* 2025-07-12 20:21:57.407427 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407434 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:21:57.407441 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:21:57.407448 | orchestrator | 2025-07-12 20:21:57.407461 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-07-12 20:21:57.407468 | orchestrator | Saturday 12 July 2025 20:21:47 +0000 (0:00:10.175) 0:04:37.886 ********* 2025-07-12 20:21:57.407475 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:21:57.407482 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:21:57.407489 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:21:57.407496 | orchestrator | 2025-07-12 20:21:57.407504 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:21:57.407511 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 20:21:57.407519 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:21:57.407541 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:21:57.407549 | orchestrator | 2025-07-12 20:21:57.407556 | orchestrator | 2025-07-12 20:21:57.407563 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 20:21:57.407574 | orchestrator | Saturday 12 July 2025 20:21:55 +0000 (0:00:08.399) 0:04:46.286 ********* 2025-07-12 20:21:57.407581 | orchestrator | =============================================================================== 2025-07-12 20:21:57.407588 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.59s 2025-07-12 20:21:57.407596 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.80s 2025-07-12 20:21:57.407603 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.68s 2025-07-12 20:21:57.407610 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.11s 2025-07-12 20:21:57.407621 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.65s 2025-07-12 20:21:57.407629 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.45s 2025-07-12 20:21:57.407636 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.18s 2025-07-12 20:21:57.407643 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.70s 2025-07-12 20:21:57.407650 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 8.40s 2025-07-12 20:21:57.407657 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.32s 2025-07-12 20:21:57.407664 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.20s 2025-07-12 20:21:57.407672 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.07s 2025-07-12 20:21:57.407679 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.73s 2025-07-12 20:21:57.407686 | orchestrator | service-ks-register : octavia | 
Creating endpoints ---------------------- 6.60s 2025-07-12 20:21:57.407694 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.37s 2025-07-12 20:21:57.407700 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.63s 2025-07-12 20:21:57.407706 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.60s 2025-07-12 20:21:57.407713 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.34s 2025-07-12 20:21:57.407719 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.33s 2025-07-12 20:21:57.407725 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.28s 2025-07-12 20:22:00.435835 | orchestrator | 2025-07-12 20:22:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:03.476786 | orchestrator | 2025-07-12 20:22:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:06.520939 | orchestrator | 2025-07-12 20:22:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:09.561767 | orchestrator | 2025-07-12 20:22:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:12.606159 | orchestrator | 2025-07-12 20:22:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:15.653319 | orchestrator | 2025-07-12 20:22:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:18.693167 | orchestrator | 2025-07-12 20:22:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:21.738240 | orchestrator | 2025-07-12 20:22:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:24.788734 | orchestrator | 2025-07-12 20:22:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:27.834193 | orchestrator | 2025-07-12 20:22:27 | INFO  | Wait 1 second(s) until refresh of 
running tasks 2025-07-12 20:22:30.877291 | orchestrator | 2025-07-12 20:22:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:33.915768 | orchestrator | 2025-07-12 20:22:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:36.960797 | orchestrator | 2025-07-12 20:22:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:40.000162 | orchestrator | 2025-07-12 20:22:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:43.040845 | orchestrator | 2025-07-12 20:22:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:46.079970 | orchestrator | 2025-07-12 20:22:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:49.120701 | orchestrator | 2025-07-12 20:22:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:52.166101 | orchestrator | 2025-07-12 20:22:52 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:55.205519 | orchestrator | 2025-07-12 20:22:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-12 20:22:58.241278 | orchestrator | 2025-07-12 20:22:58.580540 | orchestrator | 2025-07-12 20:22:58.586315 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jul 12 20:22:58 UTC 2025 2025-07-12 20:22:58.586417 | orchestrator | 2025-07-12 20:22:59.085356 | orchestrator | ok: Runtime: 0:33:05.086262 2025-07-12 20:22:59.347998 | 2025-07-12 20:22:59.348154 | TASK [Bootstrap services] 2025-07-12 20:23:00.152112 | orchestrator | 2025-07-12 20:23:00.152302 | orchestrator | # BOOTSTRAP 2025-07-12 20:23:00.152324 | orchestrator | 2025-07-12 20:23:00.152338 | orchestrator | + set -e 2025-07-12 20:23:00.152353 | orchestrator | + echo 2025-07-12 20:23:00.152367 | orchestrator | + echo '# BOOTSTRAP' 2025-07-12 20:23:00.152385 | orchestrator | + echo 2025-07-12 20:23:00.152430 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-07-12 20:23:00.163250 
| orchestrator | + set -e 2025-07-12 20:23:00.163342 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-07-12 20:23:04.923369 | orchestrator | 2025-07-12 20:23:04 | INFO  | It takes a moment until task 5edbed10-a9a7-4c85-b5f6-514b2bb900de (flavor-manager) has been started and output is visible here. 2025-07-12 20:23:12.646804 | orchestrator | 2025-07-12 20:23:08 | INFO  | Flavor SCS-1V-4 created 2025-07-12 20:23:12.646976 | orchestrator | 2025-07-12 20:23:09 | INFO  | Flavor SCS-2V-8 created 2025-07-12 20:23:12.646999 | orchestrator | 2025-07-12 20:23:09 | INFO  | Flavor SCS-4V-16 created 2025-07-12 20:23:12.647025 | orchestrator | 2025-07-12 20:23:09 | INFO  | Flavor SCS-8V-32 created 2025-07-12 20:23:12.647037 | orchestrator | 2025-07-12 20:23:09 | INFO  | Flavor SCS-1V-2 created 2025-07-12 20:23:12.647049 | orchestrator | 2025-07-12 20:23:09 | INFO  | Flavor SCS-2V-4 created 2025-07-12 20:23:12.647060 | orchestrator | 2025-07-12 20:23:09 | INFO  | Flavor SCS-4V-8 created 2025-07-12 20:23:12.647073 | orchestrator | 2025-07-12 20:23:09 | INFO  | Flavor SCS-8V-16 created 2025-07-12 20:23:12.647095 | orchestrator | 2025-07-12 20:23:10 | INFO  | Flavor SCS-16V-32 created 2025-07-12 20:23:12.647107 | orchestrator | 2025-07-12 20:23:10 | INFO  | Flavor SCS-1V-8 created 2025-07-12 20:23:12.647119 | orchestrator | 2025-07-12 20:23:10 | INFO  | Flavor SCS-2V-16 created 2025-07-12 20:23:12.647130 | orchestrator | 2025-07-12 20:23:10 | INFO  | Flavor SCS-4V-32 created 2025-07-12 20:23:12.647141 | orchestrator | 2025-07-12 20:23:10 | INFO  | Flavor SCS-1L-1 created 2025-07-12 20:23:12.647152 | orchestrator | 2025-07-12 20:23:10 | INFO  | Flavor SCS-2V-4-20s created 2025-07-12 20:23:12.647163 | orchestrator | 2025-07-12 20:23:10 | INFO  | Flavor SCS-4V-16-100s created 2025-07-12 20:23:12.647174 | orchestrator | 2025-07-12 20:23:10 | INFO  | Flavor SCS-1V-4-10 created 2025-07-12 20:23:12.647185 | orchestrator | 2025-07-12 20:23:10 | INFO  | 
Flavor SCS-2V-8-20 created 2025-07-12 20:23:12.647196 | orchestrator | 2025-07-12 20:23:11 | INFO  | Flavor SCS-4V-16-50 created 2025-07-12 20:23:12.647207 | orchestrator | 2025-07-12 20:23:11 | INFO  | Flavor SCS-8V-32-100 created 2025-07-12 20:23:12.647218 | orchestrator | 2025-07-12 20:23:11 | INFO  | Flavor SCS-1V-2-5 created 2025-07-12 20:23:12.647229 | orchestrator | 2025-07-12 20:23:11 | INFO  | Flavor SCS-2V-4-10 created 2025-07-12 20:23:12.647240 | orchestrator | 2025-07-12 20:23:11 | INFO  | Flavor SCS-4V-8-20 created 2025-07-12 20:23:12.647251 | orchestrator | 2025-07-12 20:23:11 | INFO  | Flavor SCS-8V-16-50 created 2025-07-12 20:23:12.647262 | orchestrator | 2025-07-12 20:23:11 | INFO  | Flavor SCS-16V-32-100 created 2025-07-12 20:23:12.647274 | orchestrator | 2025-07-12 20:23:12 | INFO  | Flavor SCS-1V-8-20 created 2025-07-12 20:23:12.647285 | orchestrator | 2025-07-12 20:23:12 | INFO  | Flavor SCS-2V-16-50 created 2025-07-12 20:23:12.647295 | orchestrator | 2025-07-12 20:23:12 | INFO  | Flavor SCS-4V-32-100 created 2025-07-12 20:23:12.647306 | orchestrator | 2025-07-12 20:23:12 | INFO  | Flavor SCS-1L-1-5 created 2025-07-12 20:23:14.797153 | orchestrator | 2025-07-12 20:23:14 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-07-12 20:23:24.939195 | orchestrator | 2025-07-12 20:23:24 | INFO  | Task 9293ac52-257e-4d57-995a-7249177640df (bootstrap-basic) was prepared for execution. 2025-07-12 20:23:24.939369 | orchestrator | 2025-07-12 20:23:24 | INFO  | It takes a moment until task 9293ac52-257e-4d57-995a-7249177640df (bootstrap-basic) has been started and output is visible here. 
2025-07-12 20:24:25.225152 | orchestrator | 2025-07-12 20:24:25.225241 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-07-12 20:24:25.225250 | orchestrator | 2025-07-12 20:24:25.225256 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 20:24:25.225262 | orchestrator | Saturday 12 July 2025 20:23:29 +0000 (0:00:00.092) 0:00:00.092 ********* 2025-07-12 20:24:25.225268 | orchestrator | ok: [localhost] 2025-07-12 20:24:25.225274 | orchestrator | 2025-07-12 20:24:25.225280 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-07-12 20:24:25.225287 | orchestrator | Saturday 12 July 2025 20:23:31 +0000 (0:00:01.898) 0:00:01.990 ********* 2025-07-12 20:24:25.225292 | orchestrator | ok: [localhost] 2025-07-12 20:24:25.225297 | orchestrator | 2025-07-12 20:24:25.225302 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-07-12 20:24:25.225307 | orchestrator | Saturday 12 July 2025 20:23:40 +0000 (0:00:08.719) 0:00:10.710 ********* 2025-07-12 20:24:25.225313 | orchestrator | changed: [localhost] 2025-07-12 20:24:25.225318 | orchestrator | 2025-07-12 20:24:25.225323 | orchestrator | TASK [Get volume type local] *************************************************** 2025-07-12 20:24:25.225329 | orchestrator | Saturday 12 July 2025 20:23:47 +0000 (0:00:07.632) 0:00:18.343 ********* 2025-07-12 20:24:25.225334 | orchestrator | ok: [localhost] 2025-07-12 20:24:25.225339 | orchestrator | 2025-07-12 20:24:25.225345 | orchestrator | TASK [Create volume type local] ************************************************ 2025-07-12 20:24:25.225350 | orchestrator | Saturday 12 July 2025 20:23:53 +0000 (0:00:06.170) 0:00:24.514 ********* 2025-07-12 20:24:25.225355 | orchestrator | changed: [localhost] 2025-07-12 20:24:25.225364 | orchestrator | 2025-07-12 20:24:25.225411 | orchestrator | 
TASK [Create public network] *************************************************** 2025-07-12 20:24:25.225417 | orchestrator | Saturday 12 July 2025 20:24:01 +0000 (0:00:07.230) 0:00:31.745 ********* 2025-07-12 20:24:25.225422 | orchestrator | changed: [localhost] 2025-07-12 20:24:25.225427 | orchestrator | 2025-07-12 20:24:25.225432 | orchestrator | TASK [Set public network to default] ******************************************* 2025-07-12 20:24:25.225437 | orchestrator | Saturday 12 July 2025 20:24:06 +0000 (0:00:05.795) 0:00:37.540 ********* 2025-07-12 20:24:25.225442 | orchestrator | changed: [localhost] 2025-07-12 20:24:25.225447 | orchestrator | 2025-07-12 20:24:25.225460 | orchestrator | TASK [Create public subnet] **************************************************** 2025-07-12 20:24:25.225466 | orchestrator | Saturday 12 July 2025 20:24:13 +0000 (0:00:06.258) 0:00:43.798 ********* 2025-07-12 20:24:25.225471 | orchestrator | changed: [localhost] 2025-07-12 20:24:25.225476 | orchestrator | 2025-07-12 20:24:25.225481 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-07-12 20:24:25.225486 | orchestrator | Saturday 12 July 2025 20:24:17 +0000 (0:00:04.504) 0:00:48.302 ********* 2025-07-12 20:24:25.225492 | orchestrator | changed: [localhost] 2025-07-12 20:24:25.225497 | orchestrator | 2025-07-12 20:24:25.225502 | orchestrator | TASK [Create manager role] ***************************************************** 2025-07-12 20:24:25.225507 | orchestrator | Saturday 12 July 2025 20:24:21 +0000 (0:00:03.854) 0:00:52.157 ********* 2025-07-12 20:24:25.225512 | orchestrator | ok: [localhost] 2025-07-12 20:24:25.225518 | orchestrator | 2025-07-12 20:24:25.225523 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:24:25.225528 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:24:25.225535 | orchestrator 
| 2025-07-12 20:24:25.225540 | orchestrator | 2025-07-12 20:24:25.225545 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:24:25.225550 | orchestrator | Saturday 12 July 2025 20:24:24 +0000 (0:00:03.512) 0:00:55.669 ********* 2025-07-12 20:24:25.225572 | orchestrator | =============================================================================== 2025-07-12 20:24:25.225577 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.72s 2025-07-12 20:24:25.225583 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.63s 2025-07-12 20:24:25.225588 | orchestrator | Create volume type local ------------------------------------------------ 7.23s 2025-07-12 20:24:25.225593 | orchestrator | Set public network to default ------------------------------------------- 6.26s 2025-07-12 20:24:25.225598 | orchestrator | Get volume type local --------------------------------------------------- 6.17s 2025-07-12 20:24:25.225603 | orchestrator | Create public network --------------------------------------------------- 5.80s 2025-07-12 20:24:25.225609 | orchestrator | Create public subnet ---------------------------------------------------- 4.50s 2025-07-12 20:24:25.225614 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.85s 2025-07-12 20:24:25.225619 | orchestrator | Create manager role ----------------------------------------------------- 3.51s 2025-07-12 20:24:25.225624 | orchestrator | Gathering Facts --------------------------------------------------------- 1.90s 2025-07-12 20:24:27.568556 | orchestrator | 2025-07-12 20:24:27 | INFO  | It takes a moment until task bc6f60f3-369f-4be7-ae95-02a7aad128b2 (image-manager) has been started and output is visible here. 
2025-07-12 20:25:08.827149 | orchestrator | 2025-07-12 20:24:31 | INFO  | Processing image 'Cirros 0.6.2' 2025-07-12 20:25:08.827306 | orchestrator | 2025-07-12 20:24:31 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-07-12 20:25:08.827328 | orchestrator | 2025-07-12 20:24:31 | INFO  | Importing image Cirros 0.6.2 2025-07-12 20:25:08.827391 | orchestrator | 2025-07-12 20:24:31 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-12 20:25:08.827439 | orchestrator | 2025-07-12 20:24:33 | INFO  | Waiting for image to leave queued state... 2025-07-12 20:25:08.827452 | orchestrator | 2025-07-12 20:24:35 | INFO  | Waiting for import to complete... 2025-07-12 20:25:08.827464 | orchestrator | 2025-07-12 20:24:45 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-07-12 20:25:08.827475 | orchestrator | 2025-07-12 20:24:45 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-07-12 20:25:08.827487 | orchestrator | 2025-07-12 20:24:45 | INFO  | Setting internal_version = 0.6.2 2025-07-12 20:25:08.827498 | orchestrator | 2025-07-12 20:24:45 | INFO  | Setting image_original_user = cirros 2025-07-12 20:25:08.827511 | orchestrator | 2025-07-12 20:24:45 | INFO  | Adding tag os:cirros 2025-07-12 20:25:08.827523 | orchestrator | 2025-07-12 20:24:46 | INFO  | Setting property architecture: x86_64 2025-07-12 20:25:08.827535 | orchestrator | 2025-07-12 20:24:46 | INFO  | Setting property hw_disk_bus: scsi 2025-07-12 20:25:08.827546 | orchestrator | 2025-07-12 20:24:46 | INFO  | Setting property hw_rng_model: virtio 2025-07-12 20:25:08.827558 | orchestrator | 2025-07-12 20:24:46 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-12 20:25:08.827573 | orchestrator | 2025-07-12 20:24:47 | INFO  | Setting property hw_watchdog_action: reset 2025-07-12 20:25:08.827614 | orchestrator | 2025-07-12 20:24:47 | 
INFO  | Setting property hypervisor_type: qemu 2025-07-12 20:25:08.827626 | orchestrator | 2025-07-12 20:24:47 | INFO  | Setting property os_distro: cirros 2025-07-12 20:25:08.827639 | orchestrator | 2025-07-12 20:24:47 | INFO  | Setting property replace_frequency: never 2025-07-12 20:25:08.827651 | orchestrator | 2025-07-12 20:24:47 | INFO  | Setting property uuid_validity: none 2025-07-12 20:25:08.827663 | orchestrator | 2025-07-12 20:24:48 | INFO  | Setting property provided_until: none 2025-07-12 20:25:08.827701 | orchestrator | 2025-07-12 20:24:48 | INFO  | Setting property image_description: Cirros 2025-07-12 20:25:08.827724 | orchestrator | 2025-07-12 20:24:48 | INFO  | Setting property image_name: Cirros 2025-07-12 20:25:08.827736 | orchestrator | 2025-07-12 20:24:48 | INFO  | Setting property internal_version: 0.6.2 2025-07-12 20:25:08.827755 | orchestrator | 2025-07-12 20:24:48 | INFO  | Setting property image_original_user: cirros 2025-07-12 20:25:08.827768 | orchestrator | 2025-07-12 20:24:49 | INFO  | Setting property os_version: 0.6.2 2025-07-12 20:25:08.827781 | orchestrator | 2025-07-12 20:24:49 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-12 20:25:08.827795 | orchestrator | 2025-07-12 20:24:49 | INFO  | Setting property image_build_date: 2023-05-30 2025-07-12 20:25:08.827807 | orchestrator | 2025-07-12 20:24:50 | INFO  | Checking status of 'Cirros 0.6.2' 2025-07-12 20:25:08.827837 | orchestrator | 2025-07-12 20:24:50 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-07-12 20:25:08.827849 | orchestrator | 2025-07-12 20:24:50 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-07-12 20:25:08.827862 | orchestrator | 2025-07-12 20:24:50 | INFO  | Processing image 'Cirros 0.6.3' 2025-07-12 20:25:08.827875 | orchestrator | 2025-07-12 20:24:50 | INFO  | Tested URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-07-12 20:25:08.827887 | orchestrator | 2025-07-12 20:24:50 | INFO  | Importing image Cirros 0.6.3 2025-07-12 20:25:08.827900 | orchestrator | 2025-07-12 20:24:50 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-12 20:25:08.827910 | orchestrator | 2025-07-12 20:24:51 | INFO  | Waiting for image to leave queued state... 2025-07-12 20:25:08.827921 | orchestrator | 2025-07-12 20:24:53 | INFO  | Waiting for import to complete... 2025-07-12 20:25:08.827932 | orchestrator | 2025-07-12 20:25:03 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-07-12 20:25:08.827962 | orchestrator | 2025-07-12 20:25:04 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-07-12 20:25:08.827974 | orchestrator | 2025-07-12 20:25:04 | INFO  | Setting internal_version = 0.6.3 2025-07-12 20:25:08.827985 | orchestrator | 2025-07-12 20:25:04 | INFO  | Setting image_original_user = cirros 2025-07-12 20:25:08.828019 | orchestrator | 2025-07-12 20:25:04 | INFO  | Adding tag os:cirros 2025-07-12 20:25:08.828031 | orchestrator | 2025-07-12 20:25:04 | INFO  | Setting property architecture: x86_64 2025-07-12 20:25:08.828042 | orchestrator | 2025-07-12 20:25:04 | INFO  | Setting property hw_disk_bus: scsi 2025-07-12 20:25:08.828053 | orchestrator | 2025-07-12 20:25:04 | INFO  | Setting property hw_rng_model: virtio 2025-07-12 20:25:08.828063 | orchestrator | 2025-07-12 20:25:05 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-12 20:25:08.828074 | orchestrator | 2025-07-12 20:25:05 | INFO  | Setting property hw_watchdog_action: reset 2025-07-12 20:25:08.828085 | orchestrator | 2025-07-12 20:25:05 | INFO  | Setting property hypervisor_type: qemu 2025-07-12 20:25:08.828095 | orchestrator | 2025-07-12 20:25:05 | INFO  | Setting property os_distro: cirros 2025-07-12 20:25:08.828106 | 
orchestrator | 2025-07-12 20:25:05 | INFO  | Setting property replace_frequency: never 2025-07-12 20:25:08.828117 | orchestrator | 2025-07-12 20:25:06 | INFO  | Setting property uuid_validity: none 2025-07-12 20:25:08.828148 | orchestrator | 2025-07-12 20:25:06 | INFO  | Setting property provided_until: none 2025-07-12 20:25:08.828160 | orchestrator | 2025-07-12 20:25:06 | INFO  | Setting property image_description: Cirros 2025-07-12 20:25:08.828170 | orchestrator | 2025-07-12 20:25:06 | INFO  | Setting property image_name: Cirros 2025-07-12 20:25:08.828181 | orchestrator | 2025-07-12 20:25:06 | INFO  | Setting property internal_version: 0.6.3 2025-07-12 20:25:08.828191 | orchestrator | 2025-07-12 20:25:07 | INFO  | Setting property image_original_user: cirros 2025-07-12 20:25:08.828202 | orchestrator | 2025-07-12 20:25:07 | INFO  | Setting property os_version: 0.6.3 2025-07-12 20:25:08.828213 | orchestrator | 2025-07-12 20:25:07 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-12 20:25:08.828223 | orchestrator | 2025-07-12 20:25:07 | INFO  | Setting property image_build_date: 2024-09-26 2025-07-12 20:25:08.828251 | orchestrator | 2025-07-12 20:25:08 | INFO  | Checking status of 'Cirros 0.6.3' 2025-07-12 20:25:08.828262 | orchestrator | 2025-07-12 20:25:08 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-07-12 20:25:08.828278 | orchestrator | 2025-07-12 20:25:08 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-07-12 20:25:09.171227 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-07-12 20:25:11.267659 | orchestrator | 2025-07-12 20:25:11 | INFO  | date: 2025-07-12 2025-07-12 20:25:11.267785 | orchestrator | 2025-07-12 20:25:11 | INFO  | image: octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 20:25:11.267805 | orchestrator | 2025-07-12 20:25:11 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 20:25:11.267840 | orchestrator | 2025-07-12 20:25:11 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2.CHECKSUM 2025-07-12 20:25:11.305147 | orchestrator | 2025-07-12 20:25:11 | INFO  | checksum: c95855ae58dddb977df0d8e11b851fc66dd0abac9e608812e6020c0a95df8f26 2025-07-12 20:25:11.383655 | orchestrator | 2025-07-12 20:25:11 | INFO  | It takes a moment until task f4b7f9f0-3904-4ea8-84dc-a3f0b354ff88 (image-manager) has been started and output is visible here. 2025-07-12 20:26:11.484926 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-07-12 20:26:11.485110 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-07-12 20:26:11.485129 | orchestrator | 2025-07-12 20:25:13 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 20:26:11.485160 | orchestrator | 2025-07-12 20:25:13 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2: 200 2025-07-12 20:26:11.485175 | orchestrator | 2025-07-12 20:25:13 | INFO  | Importing image OpenStack Octavia Amphora 2025-07-12 2025-07-12 20:26:11.485187 | orchestrator | 2025-07-12 20:25:13 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 20:26:11.485199 | orchestrator | 2025-07-12 20:25:13 | INFO  | Waiting for image to leave queued state... 2025-07-12 20:26:11.485241 | orchestrator | 2025-07-12 20:25:15 | INFO  | Waiting for import to complete... 2025-07-12 20:26:11.485253 | orchestrator | 2025-07-12 20:25:25 | INFO  | Waiting for import to complete... 2025-07-12 20:26:11.485264 | orchestrator | 2025-07-12 20:25:35 | INFO  | Waiting for import to complete... 2025-07-12 20:26:11.485275 | orchestrator | 2025-07-12 20:25:45 | INFO  | Waiting for import to complete... 2025-07-12 20:26:11.485285 | orchestrator | 2025-07-12 20:25:56 | INFO  | Waiting for import to complete... 2025-07-12 20:26:11.485341 | orchestrator | Failure: Unable to establish connection to https://api.testbed.osism.xyz:9292/v2/images/47f2c6b1-6c49-404d-a2f6-7f2bd0122d28: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')). 
Retrying in 0.5s.1 retries left 2025-07-12 20:26:11.485354 | orchestrator | 2025-07-12 20:26:06 | INFO  | Import of 'OpenStack Octavia Amphora 2025-07-12' successfully completed, reloading images 2025-07-12 20:26:11.485366 | orchestrator | 2025-07-12 20:26:07 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 20:26:11.485377 | orchestrator | 2025-07-12 20:26:07 | INFO  | Setting internal_version = 2025-07-12 2025-07-12 20:26:11.485388 | orchestrator | 2025-07-12 20:26:07 | INFO  | Setting image_original_user = ubuntu 2025-07-12 20:26:11.485399 | orchestrator | 2025-07-12 20:26:07 | INFO  | Adding tag amphora 2025-07-12 20:26:11.485410 | orchestrator | 2025-07-12 20:26:07 | INFO  | Adding tag os:ubuntu 2025-07-12 20:26:11.485431 | orchestrator | 2025-07-12 20:26:07 | INFO  | Setting property architecture: x86_64 2025-07-12 20:26:11.485442 | orchestrator | 2025-07-12 20:26:07 | INFO  | Setting property hw_disk_bus: scsi 2025-07-12 20:26:11.485452 | orchestrator | 2025-07-12 20:26:07 | INFO  | Setting property hw_rng_model: virtio 2025-07-12 20:26:11.485463 | orchestrator | 2025-07-12 20:26:08 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-12 20:26:11.485474 | orchestrator | 2025-07-12 20:26:08 | INFO  | Setting property hw_watchdog_action: reset 2025-07-12 20:26:11.485485 | orchestrator | 2025-07-12 20:26:08 | INFO  | Setting property hypervisor_type: qemu 2025-07-12 20:26:11.485496 | orchestrator | 2025-07-12 20:26:08 | INFO  | Setting property os_distro: ubuntu 2025-07-12 20:26:11.485506 | orchestrator | 2025-07-12 20:26:08 | INFO  | Setting property replace_frequency: quarterly 2025-07-12 20:26:11.485517 | orchestrator | 2025-07-12 20:26:09 | INFO  | Setting property uuid_validity: last-1 2025-07-12 20:26:11.485527 | orchestrator | 2025-07-12 20:26:09 | INFO  | Setting property provided_until: none 2025-07-12 20:26:11.485538 | orchestrator | 2025-07-12 20:26:09 | INFO  | Setting property image_description: OpenStack 
Octavia Amphora 2025-07-12 20:26:11.485549 | orchestrator | 2025-07-12 20:26:09 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-07-12 20:26:11.485560 | orchestrator | 2025-07-12 20:26:10 | INFO  | Setting property internal_version: 2025-07-12 2025-07-12 20:26:11.485571 | orchestrator | 2025-07-12 20:26:10 | INFO  | Setting property image_original_user: ubuntu 2025-07-12 20:26:11.485581 | orchestrator | 2025-07-12 20:26:10 | INFO  | Setting property os_version: 2025-07-12 2025-07-12 20:26:11.485619 | orchestrator | 2025-07-12 20:26:10 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2 2025-07-12 20:26:11.485631 | orchestrator | 2025-07-12 20:26:10 | INFO  | Setting property image_build_date: 2025-07-12 2025-07-12 20:26:11.485651 | orchestrator | 2025-07-12 20:26:11 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 20:26:11.485662 | orchestrator | 2025-07-12 20:26:11 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-07-12' 2025-07-12 20:26:11.485673 | orchestrator | 2025-07-12 20:26:11 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-07-12 20:26:11.485684 | orchestrator | 2025-07-12 20:26:11 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-07-12 20:26:11.485696 | orchestrator | 2025-07-12 20:26:11 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-07-12 20:26:11.485706 | orchestrator | 2025-07-12 20:26:11 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-07-12 20:26:12.026583 | orchestrator | ok: Runtime: 0:03:12.087240 2025-07-12 20:26:12.045705 | 2025-07-12 20:26:12.045910 | TASK [Run checks] 2025-07-12 20:26:12.701665 | orchestrator | + set -e 2025-07-12 20:26:12.701842 | orchestrator | + source /opt/configuration/scripts/include.sh 
2025-07-12 20:26:12.701866 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 20:26:12.701887 | orchestrator | ++ INTERACTIVE=false 2025-07-12 20:26:12.701900 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 20:26:12.701913 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 20:26:12.701926 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-12 20:26:12.702859 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-12 20:26:12.708883 | orchestrator | 2025-07-12 20:26:12.708953 | orchestrator | # CHECK 2025-07-12 20:26:12.708974 | orchestrator | 2025-07-12 20:26:12.708987 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-12 20:26:12.709004 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-12 20:26:12.709016 | orchestrator | + echo 2025-07-12 20:26:12.709027 | orchestrator | + echo '# CHECK' 2025-07-12 20:26:12.709037 | orchestrator | + echo 2025-07-12 20:26:12.709058 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-12 20:26:12.709992 | orchestrator | ++ semver latest 5.0.0 2025-07-12 20:26:12.771012 | orchestrator | 2025-07-12 20:26:12.771098 | orchestrator | ## Containers @ testbed-manager 2025-07-12 20:26:12.771116 | orchestrator | 2025-07-12 20:26:12.771131 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-12 20:26:12.771145 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-12 20:26:12.771159 | orchestrator | + echo 2025-07-12 20:26:12.771173 | orchestrator | + echo '## Containers @ testbed-manager' 2025-07-12 20:26:12.771187 | orchestrator | + echo 2025-07-12 20:26:12.771200 | orchestrator | + osism container testbed-manager ps 2025-07-12 20:26:15.042035 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-12 20:26:15.042856 | orchestrator | f604cd87d19a registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 
minutes prometheus_blackbox_exporter 2025-07-12 20:26:15.042881 | orchestrator | 58c93f5ee7b7 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_alertmanager 2025-07-12 20:26:15.042901 | orchestrator | e92e663edc4d registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-07-12 20:26:15.042907 | orchestrator | ef11e49cc4c1 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-07-12 20:26:15.042914 | orchestrator | 5df710942926 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2025-07-12 20:26:15.043662 | orchestrator | 358d21aa2933 registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient 2025-07-12 20:26:15.043696 | orchestrator | e8322704e1bd registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-07-12 20:26:15.043704 | orchestrator | e8edf135772d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-07-12 20:26:15.043712 | orchestrator | f75d9ce09a22 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-07-12 20:26:15.043737 | orchestrator | ff27f368fa81 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 30 minutes openstackclient 2025-07-12 20:26:15.043745 | orchestrator | caf08c361f57 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 31 minutes ago Up 30 minutes (healthy) 8080/tcp homer 2025-07-12 20:26:15.043752 | orchestrator | 71fc3c1aceab registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-07-12 
20:26:15.043759 | orchestrator | ed8903de5f6d registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 56 minutes ago Up 36 minutes (healthy) manager-inventory_reconciler-1 2025-07-12 20:26:15.043767 | orchestrator | 8d1d4a8af73a registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 56 minutes ago Up 37 minutes (healthy) osism-ansible 2025-07-12 20:26:15.043784 | orchestrator | b25109b63a7e registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 56 minutes ago Up 37 minutes (healthy) ceph-ansible 2025-07-12 20:26:15.043818 | orchestrator | f361ee457567 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 56 minutes ago Up 37 minutes (healthy) osism-kubernetes 2025-07-12 20:26:15.043833 | orchestrator | 1b3c8c50e17a registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 56 minutes ago Up 37 minutes (healthy) kolla-ansible 2025-07-12 20:26:15.043842 | orchestrator | 695372eec1df registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 56 minutes ago Up 37 minutes (healthy) 8000/tcp manager-ara-server-1 2025-07-12 20:26:15.043850 | orchestrator | 41ea843f3493 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 56 minutes ago Up 37 minutes (healthy) osismclient 2025-07-12 20:26:15.043857 | orchestrator | 1179b6c6a7bf registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-07-12 20:26:15.043865 | orchestrator | 84b2a7977805 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-listener-1 2025-07-12 20:26:15.043873 | orchestrator | a93cfbcfea84 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 56 minutes ago Up 37 minutes (healthy) 6379/tcp manager-redis-1 2025-07-12 20:26:15.043881 | orchestrator | 9b06b144109d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 
minutes (healthy) manager-flower-1 2025-07-12 20:26:15.043888 | orchestrator | 60fbd8e1f9c7 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-openstack-1 2025-07-12 20:26:15.043901 | orchestrator | c7eadb12462d registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 56 minutes ago Up 37 minutes (healthy) 3306/tcp manager-mariadb-1 2025-07-12 20:26:15.043909 | orchestrator | 8da7bf8ec2c3 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 56 minutes ago Up 37 minutes (healthy) manager-beat-1 2025-07-12 20:26:15.043917 | orchestrator | 717903851f5b registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-07-12 20:26:15.399660 | orchestrator | 2025-07-12 20:26:15.399711 | orchestrator | ## Images @ testbed-manager 2025-07-12 20:26:15.399718 | orchestrator | 2025-07-12 20:26:15.399723 | orchestrator | + echo 2025-07-12 20:26:15.399738 | orchestrator | + echo '## Images @ testbed-manager' 2025-07-12 20:26:15.399743 | orchestrator | + echo 2025-07-12 20:26:15.399748 | orchestrator | + osism container testbed-manager images 2025-07-12 20:26:17.684016 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-12 20:26:17.684182 | orchestrator | registry.osism.tech/osism/osism-ansible latest 1ab605c61d0a 8 hours ago 575MB 2025-07-12 20:26:17.684216 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d2fcb41febbc 17 hours ago 11.5MB 2025-07-12 20:26:17.684228 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 751f5a3be689 17 hours ago 234MB 2025-07-12 20:26:17.684239 | orchestrator | registry.osism.tech/osism/cephclient reef 6e86f0318c12 17 hours ago 453MB 2025-07-12 20:26:17.684249 | orchestrator | registry.osism.tech/kolla/cron 2024.2 4ce8240a893c 19 hours ago 318MB 2025-07-12 20:26:17.684258 | orchestrator | 
registry.osism.tech/kolla/kolla-toolbox 2024.2 f6a8ddc0fa19 19 hours ago 746MB 2025-07-12 20:26:17.684322 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cb87a0b5a431 19 hours ago 628MB 2025-07-12 20:26:17.684358 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 7007172fb408 19 hours ago 410MB 2025-07-12 20:26:17.684369 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7743da2fe9b2 19 hours ago 358MB 2025-07-12 20:26:17.684378 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 e582fc7c3e8e 19 hours ago 891MB 2025-07-12 20:26:17.684388 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 6a16161bc0ba 19 hours ago 456MB 2025-07-12 20:26:17.684398 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 8f186821a09b 19 hours ago 360MB 2025-07-12 20:26:17.684407 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 30b94beeef83 20 hours ago 535MB 2025-07-12 20:26:17.684417 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest db2d89ab0928 20 hours ago 1.21GB 2025-07-12 20:26:17.684427 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 8829472f7c53 20 hours ago 571MB 2025-07-12 20:26:17.684436 | orchestrator | registry.osism.tech/osism/osism latest c4671b5d05ab 20 hours ago 311MB 2025-07-12 20:26:17.684447 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest c8091be898ad 20 hours ago 308MB 2025-07-12 20:26:17.684456 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 weeks ago 226MB 2025-07-12 20:26:17.684466 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 7fb85a4198e9 4 weeks ago 329MB 2025-07-12 20:26:17.684557 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 6 weeks ago 41.4MB 2025-07-12 20:26:17.684572 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 5 months ago 571MB 2025-07-12 20:26:17.684582 | 
orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 10 months ago 300MB 2025-07-12 20:26:17.684592 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 13 months ago 146MB 2025-07-12 20:26:18.007255 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-12 20:26:18.007449 | orchestrator | ++ semver latest 5.0.0 2025-07-12 20:26:18.071705 | orchestrator | 2025-07-12 20:26:18.071818 | orchestrator | ## Containers @ testbed-node-0 2025-07-12 20:26:18.071835 | orchestrator | 2025-07-12 20:26:18.071848 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-12 20:26:18.071860 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-12 20:26:18.071871 | orchestrator | + echo 2025-07-12 20:26:18.071883 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-07-12 20:26:18.071894 | orchestrator | + echo 2025-07-12 20:26:18.071905 | orchestrator | + osism container testbed-node-0 ps 2025-07-12 20:26:20.425308 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-12 20:26:20.425446 | orchestrator | 6bf9b764ebce registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-12 20:26:20.425459 | orchestrator | df0c77ff16e5 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-12 20:26:20.425466 | orchestrator | ee3f3cd32a1c registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-12 20:26:20.425472 | orchestrator | 4f8a0204dc3e registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-07-12 20:26:20.425479 | orchestrator | 1a85bf10395f registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 
2025-07-12 20:26:20.425485 | orchestrator | 3a41d491cd55 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-07-12 20:26:20.425492 | orchestrator | eecb0c631fd7 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-07-12 20:26:20.425498 | orchestrator | b9b829f28640 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-07-12 20:26:20.425533 | orchestrator | 654f55af4f24 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-07-12 20:26:20.425540 | orchestrator | 7d0d49340c4a registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-07-12 20:26:20.425546 | orchestrator | 30e1a9135289 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-07-12 20:26:20.425553 | orchestrator | f14ccd1b8782 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-07-12 20:26:20.425949 | orchestrator | 23fe8d9b474e registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-07-12 20:26:20.425974 | orchestrator | b05e9f0fcd31 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-07-12 20:26:20.426061 | orchestrator | bbeae53a7a57 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-07-12 20:26:20.426085 | orchestrator | 70578590580e registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-07-12 20:26:20.426104 
| orchestrator | 28a4f70cdaf1 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-07-12 20:26:20.426124 | orchestrator | ca1c2bd34189 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-07-12 20:26:20.426143 | orchestrator | c477e8043cd6 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-07-12 20:26:20.426163 | orchestrator | 64ba22705f84 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-07-12 20:26:20.426180 | orchestrator | 0b1fe069ec9c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-07-12 20:26:20.426191 | orchestrator | 7f4d95a02931 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes (healthy) nova_api 2025-07-12 20:26:20.426202 | orchestrator | 2e54c4ad6f5f registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-12 20:26:20.426213 | orchestrator | 3a9b3bd1f6c2 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter 2025-07-12 20:26:20.426224 | orchestrator | c762b11099c4 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-07-12 20:26:20.426235 | orchestrator | 911c9b7700d1 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-07-12 20:26:20.426245 | orchestrator | eef822304758 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes 
(healthy) cinder_scheduler 2025-07-12 20:26:20.426280 | orchestrator | 68325eebe171 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-07-12 20:26:20.426316 | orchestrator | 357006b984d4 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-07-12 20:26:20.426339 | orchestrator | 1bf15ef988a2 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-07-12 20:26:20.426357 | orchestrator | 53ace5fc23d5 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-07-12 20:26:20.426377 | orchestrator | ee03c4c20cb9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0 2025-07-12 20:26:20.426395 | orchestrator | 8670e70980f0 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-07-12 20:26:20.426426 | orchestrator | f62412c68790 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet 2025-07-12 20:26:20.426457 | orchestrator | 92116373ebd1 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-07-12 20:26:20.426469 | orchestrator | 07d54ec94a41 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-07-12 20:26:20.426480 | orchestrator | dbb8db7f3d14 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-07-12 20:26:20.426491 | orchestrator | cf52d9b7e5d0 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 
minutes (healthy) opensearch_dashboards 2025-07-12 20:26:20.426501 | orchestrator | 7e23cf8d0afa registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-07-12 20:26:20.426514 | orchestrator | 0c431202f890 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0 2025-07-12 20:26:20.426533 | orchestrator | bd33f05e873e registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-07-12 20:26:20.426550 | orchestrator | 23dd7c15c9f8 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-07-12 20:26:20.426569 | orchestrator | 74ad85486a59 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-07-12 20:26:20.426589 | orchestrator | c87dd7ce6f50 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd 2025-07-12 20:26:20.426608 | orchestrator | dc714dea282f registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-07-12 20:26:20.426627 | orchestrator | 38ad74d1bc9e registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-07-12 20:26:20.426638 | orchestrator | a9e69c6fd71d registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-07-12 20:26:20.426648 | orchestrator | 5516968339c6 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0 2025-07-12 20:26:20.426659 | orchestrator | 41dc5e72750b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-07-12 20:26:20.426670 | orchestrator | ed9b6db35c6e 
registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-07-12 20:26:20.426683 | orchestrator | a43954b0a13c registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db 2025-07-12 20:26:20.426701 | orchestrator | 6d67324f8545 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel 2025-07-12 20:26:20.426726 | orchestrator | 4fcdec7d89db registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis 2025-07-12 20:26:20.426781 | orchestrator | a5473069507a registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-07-12 20:26:20.426796 | orchestrator | 24a39cb6c39a registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-07-12 20:26:20.426807 | orchestrator | 3ec6ca1c5d7a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-07-12 20:26:20.426818 | orchestrator | 552fce4079df registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-07-12 20:26:20.715973 | orchestrator | 2025-07-12 20:26:20.716138 | orchestrator | ## Images @ testbed-node-0 2025-07-12 20:26:20.716155 | orchestrator | 2025-07-12 20:26:20.716167 | orchestrator | + echo 2025-07-12 20:26:20.716178 | orchestrator | + echo '## Images @ testbed-node-0' 2025-07-12 20:26:20.716190 | orchestrator | + echo 2025-07-12 20:26:20.716201 | orchestrator | + osism container testbed-node-0 images 2025-07-12 20:26:22.753721 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-12 20:26:22.753853 | orchestrator | registry.osism.tech/osism/ceph-daemon reef fe9c699108e1 17 hours ago 1.27GB 2025-07-12 20:26:22.753870 | orchestrator | 
registry.osism.tech/kolla/cron 2024.2 4ce8240a893c 19 hours ago 318MB 2025-07-12 20:26:22.754709 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 da9bab98f1c4 19 hours ago 1.01GB 2025-07-12 20:26:22.754789 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f19504b04274 19 hours ago 318MB 2025-07-12 20:26:22.754812 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 ea215f3799eb 19 hours ago 375MB 2025-07-12 20:26:22.754832 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f6a8ddc0fa19 19 hours ago 746MB 2025-07-12 20:26:22.754850 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 db9179df457c 19 hours ago 417MB 2025-07-12 20:26:22.754869 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cb87a0b5a431 19 hours ago 628MB 2025-07-12 20:26:22.754887 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2ee2aea4ecbb 19 hours ago 329MB 2025-07-12 20:26:22.754905 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 ec7afc7181a3 19 hours ago 326MB 2025-07-12 20:26:22.754924 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 9a6d9feb60b1 19 hours ago 1.55GB 2025-07-12 20:26:22.754942 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b14bb9ff6f80 19 hours ago 1.59GB 2025-07-12 20:26:22.754963 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 7007172fb408 19 hours ago 410MB 2025-07-12 20:26:22.754981 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 aad3a3158749 19 hours ago 353MB 2025-07-12 20:26:22.754998 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7743da2fe9b2 19 hours ago 358MB 2025-07-12 20:26:22.755040 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e89c3afadc38 19 hours ago 344MB 2025-07-12 20:26:22.755062 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 2cebeabcbd0e 19 hours ago 351MB 2025-07-12 20:26:22.755080 | orchestrator | 
registry.osism.tech/kolla/horizon 2024.2 adada41a764e 19 hours ago 1.21GB
2025-07-12 20:26:22.755098 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 15e39d968d77 19 hours ago 361MB
2025-07-12 20:26:22.755116 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 abe28dfb5ccc 19 hours ago 361MB
2025-07-12 20:26:22.755201 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 e8b0ed492d0f 19 hours ago 324MB
2025-07-12 20:26:22.755223 | orchestrator | registry.osism.tech/kolla/redis 2024.2 82d7de98b313 19 hours ago 324MB
2025-07-12 20:26:22.755243 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29b0dc955a2b 19 hours ago 590MB
2025-07-12 20:26:22.755264 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6ad384c8beaf 19 hours ago 1.04GB
2025-07-12 20:26:22.755319 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 95944a9fdd62 19 hours ago 1.05GB
2025-07-12 20:26:22.755339 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 52bc7fc0663b 19 hours ago 1.06GB
2025-07-12 20:26:22.755358 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0e5d94078a38 19 hours ago 1.05GB
2025-07-12 20:26:22.755377 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d803f5dcba2b 19 hours ago 1.06GB
2025-07-12 20:26:22.755395 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 7bd70fa2eaca 19 hours ago 1.05GB
2025-07-12 20:26:22.755412 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 47afb51ae8f8 19 hours ago 1.05GB
2025-07-12 20:26:22.755430 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 f8a3d90ad64b 19 hours ago 1.1GB
2025-07-12 20:26:22.755449 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 1a864f84d2f1 19 hours ago 1.1GB
2025-07-12 20:26:22.755463 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 7b97136c8365 19 hours ago 1.1GB
2025-07-12 20:26:22.755474 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 6afb7ebf1f84 19 hours ago 1.12GB
2025-07-12 20:26:22.755485 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 270474f08bd9 19 hours ago 1.12GB
2025-07-12 20:26:22.755496 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 ea10afd51d8e 19 hours ago 1.24GB
2025-07-12 20:26:22.755531 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 a80373d5f022 19 hours ago 1.31GB
2025-07-12 20:26:22.755543 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 373788c4de01 19 hours ago 1.2GB
2025-07-12 20:26:22.755554 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 e76aee078f81 19 hours ago 1.11GB
2025-07-12 20:26:22.755564 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 b7f54fc3ae64 19 hours ago 1.13GB
2025-07-12 20:26:22.755575 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 24e61d9295a6 19 hours ago 1.11GB
2025-07-12 20:26:22.755585 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 3e1a1d846e00 19 hours ago 1.29GB
2025-07-12 20:26:22.755597 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 97653d20b217 19 hours ago 1.42GB
2025-07-12 20:26:22.755607 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 06efaffd4461 19 hours ago 1.29GB
2025-07-12 20:26:22.755618 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 344e73ee870a 19 hours ago 1.29GB
2025-07-12 20:26:22.755628 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 896a5e1d5e1a 19 hours ago 1.11GB
2025-07-12 20:26:22.755639 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 915a079aa111 19 hours ago 1.11GB
2025-07-12 20:26:22.755649 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 d9c02e5ae275 19 hours ago 1.04GB
2025-07-12 20:26:22.755659 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 1d72d0a0f668 19 hours ago 1.04GB
2025-07-12 20:26:22.755670 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 ed5bf0762532 19 hours ago 1.06GB
2025-07-12 20:26:22.755690 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 3528d69772e2 19 hours ago 1.06GB
2025-07-12 20:26:22.755701 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 55b4043ace1e 19 hours ago 1.06GB
2025-07-12 20:26:22.755712 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 a7f0a5d9b28c 19 hours ago 1.15GB
2025-07-12 20:26:22.755722 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 820d96fc6871 19 hours ago 1.41GB
2025-07-12 20:26:22.755732 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 9e62aa5265cd 19 hours ago 1.41GB
2025-07-12 20:26:22.755744 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 a98dd1df23f2 19 hours ago 1.04GB
2025-07-12 20:26:22.755780 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 1a25044bfbed 19 hours ago 1.04GB
2025-07-12 20:26:22.755815 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 59af0b95f004 19 hours ago 1.04GB
2025-07-12 20:26:22.755839 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 6a2f93c44023 19 hours ago 1.04GB
2025-07-12 20:26:22.755859 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 6e5bcb7465c5 19 hours ago 946MB
2025-07-12 20:26:22.755873 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c088724e55ba 19 hours ago 946MB
2025-07-12 20:26:22.755884 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b96ae4c576bd 19 hours ago 947MB
2025-07-12 20:26:22.755895 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 4b91bbc5fcc8 19 hours ago 947MB
2025-07-12 20:26:22.963774 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 20:26:22.964517 | orchestrator | ++ semver latest 5.0.0
2025-07-12 20:26:23.012680 | orchestrator |
2025-07-12 20:26:23.012782 | orchestrator | ## Containers @ testbed-node-1
2025-07-12 20:26:23.012805 | orchestrator |
2025-07-12 20:26:23.012821 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-12 20:26:23.012838 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 20:26:23.012854 | orchestrator | + echo
2025-07-12 20:26:23.012872 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-07-12 20:26:23.012889 | orchestrator | + echo
2025-07-12 20:26:23.012905 | orchestrator | + osism container testbed-node-1 ps
2025-07-12 20:26:25.085323 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 20:26:25.085410 | orchestrator | 26c1ad8e1b8e registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-07-12 20:26:25.085425 | orchestrator | 6cdff68485fa registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-07-12 20:26:25.085436 | orchestrator | a9265c897ef5 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-07-12 20:26:25.085459 | orchestrator | 33c9e0669c84 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 4 minutes octavia_driver_agent
2025-07-12 20:26:25.085479 | orchestrator | 60dd541d47cb registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-07-12 20:26:25.085498 | orchestrator | 0e65dc43ab8a registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-07-12 20:26:25.085517 | orchestrator | f965465f5c74 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-07-12 20:26:25.085546 | orchestrator | 3e5cf1795d61 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-07-12 20:26:25.085556 | orchestrator | 32a2256bd5c4 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-07-12 20:26:25.085565 | orchestrator | 6d23f426eca1 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-07-12 20:26:25.085575 | orchestrator | 40b58aa79dbd registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-07-12 20:26:25.085585 | orchestrator | 1bb98e66f7ce registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-07-12 20:26:25.085594 | orchestrator | 6e426da4ee00 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-07-12 20:26:25.085604 | orchestrator | c79ad7c1d7e9 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) neutron_server
2025-07-12 20:26:25.085613 | orchestrator | fe31374c1086 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-07-12 20:26:25.085623 | orchestrator | 14a96b7839fc registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-07-12 20:26:25.085637 | orchestrator | b99c3f32435f registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-07-12 20:26:25.085647 | orchestrator | 6f208737dce2 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-07-12 20:26:25.085657 | orchestrator | 5e5170cf5f60 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-07-12 20:26:25.085666 | orchestrator | 9ea47eeb9ed6 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-07-12 20:26:25.085676 | orchestrator | 89076ff3550d registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-07-12 20:26:25.085702 | orchestrator | 4ccab974aac7 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-07-12 20:26:25.085712 | orchestrator | b50cb3566d1d registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-12 20:26:25.085722 | orchestrator | 8229e423ac50 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-07-12 20:26:25.085737 | orchestrator | 5410c29b962a registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-07-12 20:26:25.085746 | orchestrator | 0e9ecc37f3d1 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-07-12 20:26:25.085756 | orchestrator | 13b0fb1e7917 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-07-12 20:26:25.085772 | orchestrator | 4422adb54a67 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-07-12 20:26:25.085782 | orchestrator | 09e39e1e5416 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-07-12 20:26:25.085791 | orchestrator | ee66464af500 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-07-12 20:26:25.085801 | orchestrator | ff05181c51e2 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-07-12 20:26:25.085812 | orchestrator | 7fea35393106 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2025-07-12 20:26:25.085823 | orchestrator | cd4587213666 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-07-12 20:26:25.085835 | orchestrator | 218bd13a06fe registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-07-12 20:26:25.085846 | orchestrator | eef20d26199c registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-07-12 20:26:25.085857 | orchestrator | 78f919fdbef1 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-07-12 20:26:25.085868 | orchestrator | 5c4188b8922f registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2025-07-12 20:26:25.085879 | orchestrator | 189546c431ba registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-07-12 20:26:25.085890 | orchestrator | 89c7cf11295d registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-07-12 20:26:25.085902 | orchestrator | 00af543c44e5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2025-07-12 20:26:25.085913 | orchestrator | fa70720b393f registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-07-12 20:26:25.085925 | orchestrator | 5c77743d0249 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-07-12 20:26:25.085936 | orchestrator | a85aae6cfdeb registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-07-12 20:26:25.085947 | orchestrator | c25c37720826 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2025-07-12 20:26:25.085963 | orchestrator | 0bacb2e5667b registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db
2025-07-12 20:26:25.085976 | orchestrator | 6ecd2f004beb registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db
2025-07-12 20:26:25.085993 | orchestrator | c9f3ae10b607 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-07-12 20:26:25.086005 | orchestrator | 457094ddcd92 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-07-12 20:26:25.086102 | orchestrator | 84bd2201c0a5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1
2025-07-12 20:26:25.086115 | orchestrator | c79cd704fbbf registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-07-12 20:26:25.086126 | orchestrator | b7efbab22ee4 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-07-12 20:26:25.086137 | orchestrator | f3b02f99a52c registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-07-12 20:26:25.086153 | orchestrator | eb5fdda095f3 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-07-12 20:26:25.086164 | orchestrator | 774d14f45865 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-07-12 20:26:25.086175 | orchestrator | 23f9d7059fd7 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-07-12 20:26:25.086184 | orchestrator | 7a1d4140e123 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-07-12 20:26:25.086194 | orchestrator | 28c366845e3e registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-07-12 20:26:25.390461 | orchestrator |
2025-07-12 20:26:25.390520 | orchestrator | ## Images @ testbed-node-1
2025-07-12 20:26:25.390529 | orchestrator |
2025-07-12 20:26:25.390536 | orchestrator | + echo
2025-07-12 20:26:25.390543 | orchestrator | + echo '## Images @ testbed-node-1'
2025-07-12 20:26:25.390550 | orchestrator | + echo
2025-07-12 20:26:25.390556 | orchestrator | + osism container testbed-node-1 images
2025-07-12 20:26:27.440890 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 20:26:27.440984 | orchestrator | registry.osism.tech/osism/ceph-daemon reef fe9c699108e1 17 hours ago 1.27GB
2025-07-12 20:26:27.440998 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 da9bab98f1c4 19 hours ago 1.01GB
2025-07-12 20:26:27.441010 | orchestrator | registry.osism.tech/kolla/cron 2024.2 4ce8240a893c 19 hours ago 318MB
2025-07-12 20:26:27.441020 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f19504b04274 19 hours ago 318MB
2025-07-12 20:26:27.441031 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 ea215f3799eb 19 hours ago 375MB
2025-07-12 20:26:27.441041 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f6a8ddc0fa19 19 hours ago 746MB
2025-07-12 20:26:27.441052 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 db9179df457c 19 hours ago 417MB
2025-07-12 20:26:27.441062 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cb87a0b5a431 19 hours ago 628MB
2025-07-12 20:26:27.441073 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2ee2aea4ecbb 19 hours ago 329MB
2025-07-12 20:26:27.441084 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 ec7afc7181a3 19 hours ago 326MB
2025-07-12 20:26:27.441118 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 9a6d9feb60b1 19 hours ago 1.55GB
2025-07-12 20:26:27.441130 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b14bb9ff6f80 19 hours ago 1.59GB
2025-07-12 20:26:27.441140 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 aad3a3158749 19 hours ago 353MB
2025-07-12 20:26:27.441151 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 7007172fb408 19 hours ago 410MB
2025-07-12 20:26:27.441161 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7743da2fe9b2 19 hours ago 358MB
2025-07-12 20:26:27.441172 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e89c3afadc38 19 hours ago 344MB
2025-07-12 20:26:27.441182 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 2cebeabcbd0e 19 hours ago 351MB
2025-07-12 20:26:27.441193 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 adada41a764e 19 hours ago 1.21GB
2025-07-12 20:26:27.441203 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 15e39d968d77 19 hours ago 361MB
2025-07-12 20:26:27.441213 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 abe28dfb5ccc 19 hours ago 361MB
2025-07-12 20:26:27.441224 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 e8b0ed492d0f 19 hours ago 324MB
2025-07-12 20:26:27.441234 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29b0dc955a2b 19 hours ago 590MB
2025-07-12 20:26:27.441245 | orchestrator | registry.osism.tech/kolla/redis 2024.2 82d7de98b313 19 hours ago 324MB
2025-07-12 20:26:27.441255 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6ad384c8beaf 19 hours ago 1.04GB
2025-07-12 20:26:27.441265 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 95944a9fdd62 19 hours ago 1.05GB
2025-07-12 20:26:27.441276 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 52bc7fc0663b 19 hours ago 1.06GB
2025-07-12 20:26:27.441319 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0e5d94078a38 19 hours ago 1.05GB
2025-07-12 20:26:27.441331 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d803f5dcba2b 19 hours ago 1.06GB
2025-07-12 20:26:27.441395 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 7bd70fa2eaca 19 hours ago 1.05GB
2025-07-12 20:26:27.441408 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 47afb51ae8f8 19 hours ago 1.05GB
2025-07-12 20:26:27.441419 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 f8a3d90ad64b 19 hours ago 1.1GB
2025-07-12 20:26:27.441430 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 1a864f84d2f1 19 hours ago 1.1GB
2025-07-12 20:26:27.441441 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 7b97136c8365 19 hours ago 1.1GB
2025-07-12 20:26:27.441453 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 6afb7ebf1f84 19 hours ago 1.12GB
2025-07-12 20:26:27.441465 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 270474f08bd9 19 hours ago 1.12GB
2025-07-12 20:26:27.441477 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 ea10afd51d8e 19 hours ago 1.24GB
2025-07-12 20:26:27.441507 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 a80373d5f022 19 hours ago 1.31GB
2025-07-12 20:26:27.441520 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 373788c4de01 19 hours ago 1.2GB
2025-07-12 20:26:27.441532 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 e76aee078f81 19 hours ago 1.11GB
2025-07-12 20:26:27.441545 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 b7f54fc3ae64 19 hours ago 1.13GB
2025-07-12 20:26:27.441579 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 24e61d9295a6 19 hours ago 1.11GB
2025-07-12 20:26:27.441592 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 3e1a1d846e00 19 hours ago 1.29GB
2025-07-12 20:26:27.441605 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 97653d20b217 19 hours ago 1.42GB
2025-07-12 20:26:27.441617 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 06efaffd4461 19 hours ago 1.29GB
2025-07-12 20:26:27.441629 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 344e73ee870a 19 hours ago 1.29GB
2025-07-12 20:26:27.441641 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 ed5bf0762532 19 hours ago 1.06GB
2025-07-12 20:26:27.441653 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 3528d69772e2 19 hours ago 1.06GB
2025-07-12 20:26:27.441665 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 55b4043ace1e 19 hours ago 1.06GB
2025-07-12 20:26:27.441678 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 a7f0a5d9b28c 19 hours ago 1.15GB
2025-07-12 20:26:27.441690 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 820d96fc6871 19 hours ago 1.41GB
2025-07-12 20:26:27.441702 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 9e62aa5265cd 19 hours ago 1.41GB
2025-07-12 20:26:27.441714 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 6e5bcb7465c5 19 hours ago 946MB
2025-07-12 20:26:27.441726 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b96ae4c576bd 19 hours ago 947MB
2025-07-12 20:26:27.441739 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c088724e55ba 19 hours ago 946MB
2025-07-12 20:26:27.441751 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 4b91bbc5fcc8 19 hours ago 947MB
2025-07-12 20:26:27.637196 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 20:26:27.637302 | orchestrator | ++ semver latest 5.0.0
2025-07-12 20:26:27.687724 | orchestrator |
2025-07-12 20:26:27.687806 | orchestrator | ## Containers @ testbed-node-2
2025-07-12 20:26:27.687819 | orchestrator |
2025-07-12 20:26:27.687831 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-12 20:26:27.687841 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 20:26:27.687852 | orchestrator | + echo
2025-07-12 20:26:27.687863 | orchestrator | + echo '## Containers @ testbed-node-2'
2025-07-12 20:26:27.687875 | orchestrator | + echo
2025-07-12 20:26:27.687886 | orchestrator | + osism container testbed-node-2 ps
2025-07-12 20:26:29.900328 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 20:26:29.900420 | orchestrator | 7574d8eeec95 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-07-12 20:26:29.900435 | orchestrator | bd706ceb314f registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-07-12 20:26:29.900447 | orchestrator | 14762b65622a registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-07-12 20:26:29.900458 | orchestrator | 4289580942ba registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes octavia_driver_agent
2025-07-12 20:26:29.900468 | orchestrator | 2ca1e0ee7a98 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-07-12 20:26:29.900479 | orchestrator | 269286280a93 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-07-12 20:26:29.900511 | orchestrator | 19de73ba8e6c registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-07-12 20:26:29.900523 | orchestrator | e19dd225064f registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes (healthy) magnum_api
2025-07-12 20:26:29.900533 | orchestrator | 67a33d654aef registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-07-12 20:26:29.900544 | orchestrator | 5bd72c8a1c81 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-07-12 20:26:29.900555 | orchestrator | b5eb58473054 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-07-12 20:26:29.900581 | orchestrator | 14a2b3249e4a registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-07-12 20:26:29.900593 | orchestrator | a28c57939c28 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) designate_producer
2025-07-12 20:26:29.900603 | orchestrator | da8a8274dc9d registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-07-12 20:26:29.900614 | orchestrator | 5415118fb2a5 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central
2025-07-12 20:26:29.900625 | orchestrator | 91a592dae791 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor
2025-07-12 20:26:29.900636 | orchestrator | 5af5a2adbd52 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api
2025-07-12 20:26:29.900647 | orchestrator | 32293ce1cc8e registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-07-12 20:26:29.900657 | orchestrator | 368da551e257 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-07-12 20:26:29.900668 | orchestrator | 7fe4057a612e registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-07-12 20:26:29.900679 | orchestrator | c9c011a786d5 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-07-12 20:26:29.900714 | orchestrator | 680788e3011c registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api
2025-07-12 20:26:29.900734 | orchestrator | 3a430e209836 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-12 20:26:29.900752 | orchestrator | e893bc677891 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_elasticsearch_exporter
2025-07-12 20:26:29.900770 | orchestrator | aee920a9fd68 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api
2025-07-12 20:26:29.900788 | orchestrator | 92ecf33a4bf4 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-07-12 20:26:29.900821 | orchestrator | 86d4232da379 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler
2025-07-12 20:26:29.900841 | orchestrator | 2497f168ce3f registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api
2025-07-12 20:26:29.900859 | orchestrator | 3efcbf67a256 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-07-12 20:26:29.900879 | orchestrator | bd8d9cbe16de registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-07-12 20:26:29.900900 | orchestrator | 419a7f3c5f86 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-07-12 20:26:29.900918 | orchestrator | 0692ac293b1e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2025-07-12 20:26:29.900934 | orchestrator | ff30a0062e3a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone
2025-07-12 20:26:29.900947 | orchestrator | 93ada91314d2 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-07-12 20:26:29.900959 | orchestrator | c30f7830a2d8 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_fernet
2025-07-12 20:26:29.900971 | orchestrator | cf66578351e1 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-07-12 20:26:29.900983 | orchestrator | 5bfa8d46ecfb registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-07-12 20:26:29.901000 | orchestrator | 404cbd6fb1a9 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-07-12 20:26:29.901017 | orchestrator | 1ffa38851a84 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-07-12 20:26:29.901029 | orchestrator | 4512cdbd3478 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2
2025-07-12 20:26:29.901041 | orchestrator | 9931e09ff249 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-07-12 20:26:29.901053 | orchestrator | b77c60010008 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-07-12 20:26:29.901065 | orchestrator | a32b664315d0 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-07-12 20:26:29.901077 | orchestrator | 23b9aa505762 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd
2025-07-12 20:26:29.901147 | orchestrator | 00eb05a1fe2f registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_sb_db
2025-07-12 20:26:29.901169 | orchestrator | 4e099fbce6e9 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_nb_db
2025-07-12 20:26:29.901182 | orchestrator | fa31d20792e3 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-07-12 20:26:29.901195 | orchestrator | 96dcd108b950 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-07-12 20:26:29.901207 | orchestrator | 145ffe361713 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2
2025-07-12 20:26:29.901223 | orchestrator | 2d38579f3fe3 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-07-12 20:26:29.901234 | orchestrator | 0463285cd0bd registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) openvswitch_db
2025-07-12 20:26:29.901244 | orchestrator | e6af216a41aa registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis_sentinel
2025-07-12 20:26:29.901255 | orchestrator | 2379e5542363 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) redis
2025-07-12 20:26:29.901266 | orchestrator | 25bf388da12d registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached
2025-07-12 20:26:29.901276 | orchestrator | f648bbaebf69 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-07-12 20:26:29.901329 | orchestrator | a434e387245e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-07-12 20:26:29.901340 | orchestrator | 07584442950a registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-07-12 20:26:30.101458 | orchestrator |
2025-07-12 20:26:30.101546 | orchestrator | ## Images @ testbed-node-2
2025-07-12 20:26:30.101562 | orchestrator |
2025-07-12 20:26:30.101573 | orchestrator | + echo
2025-07-12 20:26:30.101585 | orchestrator | + echo '## Images @ testbed-node-2'
2025-07-12 20:26:30.101596 | orchestrator | + echo
2025-07-12 20:26:30.101607 | orchestrator | + osism container testbed-node-2 images
2025-07-12 20:26:32.086586 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-12 20:26:32.086676 | orchestrator | registry.osism.tech/osism/ceph-daemon reef fe9c699108e1 17 hours ago 1.27GB
2025-07-12 20:26:32.086690 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 da9bab98f1c4 19 hours ago 1.01GB
2025-07-12 20:26:32.086702 | orchestrator | registry.osism.tech/kolla/cron 2024.2 4ce8240a893c 19 hours ago 318MB
2025-07-12 20:26:32.086713 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f19504b04274 19 hours ago 318MB
2025-07-12 20:26:32.086724 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 ea215f3799eb 19 hours ago 375MB
2025-07-12 20:26:32.086734 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 f6a8ddc0fa19 19 hours ago 746MB
2025-07-12 20:26:32.086745 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 db9179df457c 19 hours ago 417MB
2025-07-12 20:26:32.086755 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 cb87a0b5a431 19 hours ago 628MB
2025-07-12 20:26:32.086766 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 2ee2aea4ecbb 19 hours ago 329MB
2025-07-12 20:26:32.086838 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 ec7afc7181a3 19 hours ago 326MB
2025-07-12 20:26:32.086851 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 9a6d9feb60b1 19 hours ago 1.55GB
2025-07-12 20:26:32.086862 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 b14bb9ff6f80 19 hours ago 1.59GB
2025-07-12 20:26:32.086873 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 aad3a3158749 19 hours ago 353MB
2025-07-12 20:26:32.086883 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 7007172fb408 19 hours ago 410MB
2025-07-12 20:26:32.086894 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 7743da2fe9b2 19 hours ago 358MB
2025-07-12 20:26:32.086905 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 e89c3afadc38 19 hours ago 344MB
2025-07-12 20:26:32.086915 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 2cebeabcbd0e 19 hours ago 351MB
2025-07-12 20:26:32.086926 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 adada41a764e 19 hours ago 1.21GB
2025-07-12 20:26:32.086937 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 15e39d968d77 19 hours ago 361MB
2025-07-12 20:26:32.086948 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 abe28dfb5ccc 19 hours ago 361MB
2025-07-12 20:26:32.086958 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 e8b0ed492d0f 19 hours ago 324MB
2025-07-12 20:26:32.086969 | orchestrator | registry.osism.tech/kolla/redis 2024.2 82d7de98b313 19 hours ago 324MB
2025-07-12 20:26:32.086980 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29b0dc955a2b 19 hours ago 590MB
2025-07-12 20:26:32.086990 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 6ad384c8beaf 19 hours ago 1.04GB
2025-07-12 20:26:32.087001 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 95944a9fdd62 19 hours ago 1.05GB
2025-07-12 20:26:32.087012 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 52bc7fc0663b 19 hours ago 1.06GB
2025-07-12 20:26:32.087026 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 0e5d94078a38 19 hours ago 1.05GB
2025-07-12 20:26:32.087047 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 d803f5dcba2b 19 hours ago 1.06GB
2025-07-12 20:26:32.087067 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 7bd70fa2eaca 19 hours ago 1.05GB
2025-07-12 20:26:32.087087 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 47afb51ae8f8 19 hours ago 1.05GB
2025-07-12 20:26:32.087113 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 f8a3d90ad64b 19 hours ago 1.1GB
2025-07-12 20:26:32.087137 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 1a864f84d2f1 19 hours ago 1.1GB
2025-07-12 20:26:32.087183 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 7b97136c8365 19 hours ago 1.1GB
2025-07-12 20:26:32.087204 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 6afb7ebf1f84 19 hours ago 1.12GB
2025-07-12 20:26:32.087223 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 270474f08bd9 19 hours ago 1.12GB
2025-07-12 20:26:32.087241 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 ea10afd51d8e 19 hours ago 1.24GB
2025-07-12 20:26:32.087273 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 a80373d5f022 19 hours ago 1.31GB
2025-07-12 20:26:32.087352 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 373788c4de01 19 hours ago 1.2GB
2025-07-12 20:26:32.087374 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 e76aee078f81 19 hours ago 1.11GB
2025-07-12 20:26:32.087409 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 b7f54fc3ae64 19 hours ago 1.13GB
2025-07-12 20:26:32.087448 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 24e61d9295a6 19 hours ago 1.11GB
2025-07-12 20:26:32.087463 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 3e1a1d846e00 19 hours ago 1.29GB
2025-07-12 20:26:32.087474 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 97653d20b217 19 hours ago 1.42GB
2025-07-12 20:26:32.087485 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 06efaffd4461 19 hours ago 1.29GB
2025-07-12 20:26:32.087496 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 344e73ee870a 19 hours ago 1.29GB
2025-07-12 20:26:32.087506 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 ed5bf0762532 19 hours ago 1.06GB
2025-07-12 20:26:32.087517 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 3528d69772e2 19 hours ago 1.06GB
2025-07-12 20:26:32.087528 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 55b4043ace1e 19 hours ago 1.06GB
2025-07-12 20:26:32.087538 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2
a7f0a5d9b28c 19 hours ago 1.15GB 2025-07-12 20:26:32.087549 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 820d96fc6871 19 hours ago 1.41GB 2025-07-12 20:26:32.087560 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 9e62aa5265cd 19 hours ago 1.41GB 2025-07-12 20:26:32.087570 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 6e5bcb7465c5 19 hours ago 946MB 2025-07-12 20:26:32.087581 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c088724e55ba 19 hours ago 946MB 2025-07-12 20:26:32.087591 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 b96ae4c576bd 19 hours ago 947MB 2025-07-12 20:26:32.087602 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 4b91bbc5fcc8 19 hours ago 947MB 2025-07-12 20:26:32.453656 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-07-12 20:26:32.460194 | orchestrator | + set -e 2025-07-12 20:26:32.460270 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 20:26:32.460981 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 20:26:32.461007 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 20:26:32.461019 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 20:26:32.461032 | orchestrator | ++ CEPH_VERSION=reef 2025-07-12 20:26:32.461044 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 20:26:32.461059 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 20:26:32.461071 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-12 20:26:32.461082 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-12 20:26:32.461093 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-12 20:26:32.461104 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-12 20:26:32.461115 | orchestrator | ++ export ARA=false 2025-07-12 20:26:32.461126 | orchestrator | ++ ARA=false 2025-07-12 20:26:32.461136 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-12 20:26:32.461147 | orchestrator | ++ DEPLOY_MODE=manager 
2025-07-12 20:26:32.461158 | orchestrator | ++ export TEMPEST=false
2025-07-12 20:26:32.461168 | orchestrator | ++ TEMPEST=false
2025-07-12 20:26:32.461183 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 20:26:32.461194 | orchestrator | ++ IS_ZUUL=true
2025-07-12 20:26:32.461206 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-07-12 20:26:32.461216 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-07-12 20:26:32.461227 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 20:26:32.461269 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 20:26:32.461282 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 20:26:32.461363 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 20:26:32.461375 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 20:26:32.461386 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 20:26:32.461396 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 20:26:32.461407 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 20:26:32.461438 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-12 20:26:32.461449 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-07-12 20:26:32.469534 | orchestrator | + set -e
2025-07-12 20:26:32.469583 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 20:26:32.469595 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 20:26:32.469606 | orchestrator | ++ INTERACTIVE=false
2025-07-12 20:26:32.469617 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 20:26:32.469628 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 20:26:32.469638 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-07-12 20:26:32.470969 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-07-12 20:26:32.478569 | orchestrator |
2025-07-12 20:26:32.478631 | orchestrator | # Ceph status
2025-07-12 20:26:32.478643 | orchestrator |
2025-07-12 20:26:32.478654 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-12 20:26:32.478667 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-12 20:26:32.478678 | orchestrator | + echo
2025-07-12 20:26:32.478688 | orchestrator | + echo '# Ceph status'
2025-07-12 20:26:32.478699 | orchestrator | + echo
2025-07-12 20:26:32.478710 | orchestrator | + ceph -s
2025-07-12 20:26:33.041791 | orchestrator | cluster:
2025-07-12 20:26:33.042364 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-07-12 20:26:33.042378 | orchestrator | health: HEALTH_OK
2025-07-12 20:26:33.042383 | orchestrator |
2025-07-12 20:26:33.042389 | orchestrator | services:
2025-07-12 20:26:33.042394 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m)
2025-07-12 20:26:33.042400 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0
2025-07-12 20:26:33.042405 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-07-12 20:26:33.042410 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 24m)
2025-07-12 20:26:33.042415 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-07-12 20:26:33.042420 | orchestrator |
2025-07-12 20:26:33.042425 | orchestrator | data:
2025-07-12 20:26:33.042430 | orchestrator | volumes: 1/1 healthy
2025-07-12 20:26:33.042435 | orchestrator | pools: 14 pools, 401 pgs
2025-07-12 20:26:33.042439 | orchestrator | objects: 524 objects, 2.2 GiB
2025-07-12 20:26:33.042444 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-07-12 20:26:33.042449 | orchestrator | pgs: 401 active+clean
2025-07-12 20:26:33.042454 | orchestrator |
2025-07-12 20:26:33.075352 | orchestrator |
2025-07-12 20:26:33.075441 | orchestrator | # Ceph versions
2025-07-12 20:26:33.075455 | orchestrator |
2025-07-12 20:26:33.075467 | orchestrator | + echo
2025-07-12 20:26:33.075479 | orchestrator | + echo '# Ceph versions'
2025-07-12 20:26:33.075491 |
orchestrator | + echo
2025-07-12 20:26:33.075502 | orchestrator | + ceph versions
2025-07-12 20:26:33.655506 | orchestrator | {
2025-07-12 20:26:33.655611 | orchestrator | "mon": {
2025-07-12 20:26:33.655628 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-07-12 20:26:33.655640 | orchestrator | },
2025-07-12 20:26:33.655665 | orchestrator | "mgr": {
2025-07-12 20:26:33.655687 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-07-12 20:26:33.655708 | orchestrator | },
2025-07-12 20:26:33.655729 | orchestrator | "osd": {
2025-07-12 20:26:33.655751 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-07-12 20:26:33.655770 | orchestrator | },
2025-07-12 20:26:33.655781 | orchestrator | "mds": {
2025-07-12 20:26:33.655792 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-07-12 20:26:33.655802 | orchestrator | },
2025-07-12 20:26:33.655813 | orchestrator | "rgw": {
2025-07-12 20:26:33.655824 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-07-12 20:26:33.655834 | orchestrator | },
2025-07-12 20:26:33.655845 | orchestrator | "overall": {
2025-07-12 20:26:33.655856 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-07-12 20:26:33.655867 | orchestrator | }
2025-07-12 20:26:33.655878 | orchestrator | }
2025-07-12 20:26:33.686589 | orchestrator |
2025-07-12 20:26:33.686686 | orchestrator | # Ceph OSD tree
2025-07-12 20:26:33.686710 | orchestrator |
2025-07-12 20:26:33.686730 | orchestrator | + echo
2025-07-12 20:26:33.686750 | orchestrator | + echo '# Ceph OSD tree'
2025-07-12 20:26:33.686762 | orchestrator | + echo
2025-07-12 20:26:33.686785 | orchestrator | + ceph osd df tree
2025-07-12 20:26:34.171633 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-07-12 20:26:34.171805 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default
2025-07-12 20:26:34.171819 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2025-07-12 20:26:34.171828 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.52 0.93 189 up osd.0
2025-07-12 20:26:34.171836 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.32 1.07 201 up osd.3
2025-07-12 20:26:34.171844 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4
2025-07-12 20:26:34.171852 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.75 1.14 192 up osd.1
2025-07-12 20:26:34.171860 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 971 MiB 1 KiB 70 MiB 19 GiB 5.09 0.86 196 up osd.4
2025-07-12 20:26:34.171867 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-07-12 20:26:34.171875 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.00 1.18 206 up osd.2
2025-07-12 20:26:34.171883 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 988 MiB 915 MiB 1 KiB 74 MiB 19 GiB 4.83 0.82 186 up osd.5
2025-07-12 20:26:34.171891 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92
2025-07-12 20:26:34.171899 | orchestrator | MIN/MAX VAR: 0.82/1.18 STDDEV: 0.82
2025-07-12 20:26:34.228283 | orchestrator |
2025-07-12 20:26:34.228452 | orchestrator | # Ceph monitor status
2025-07-12 20:26:34.228466 | orchestrator |
2025-07-12 20:26:34.228478 | orchestrator | + echo
2025-07-12 20:26:34.228595 | orchestrator | + echo '# Ceph monitor status'
2025-07-12 20:26:34.228609 | orchestrator | + echo
2025-07-12 20:26:34.228620 | orchestrator | + ceph mon stat
2025-07-12 20:26:34.841806 | orchestrator | e1: 3 mons
at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-07-12 20:26:34.886724 | orchestrator |
2025-07-12 20:26:34.886852 | orchestrator | # Ceph quorum status
2025-07-12 20:26:34.886877 | orchestrator |
2025-07-12 20:26:34.887036 | orchestrator | + echo
2025-07-12 20:26:34.887063 | orchestrator | + echo '# Ceph quorum status'
2025-07-12 20:26:34.887080 | orchestrator | + echo
2025-07-12 20:26:34.887098 | orchestrator | + ceph quorum_status
2025-07-12 20:26:34.887137 | orchestrator | + jq
2025-07-12 20:26:35.534249 | orchestrator | {
2025-07-12 20:26:35.534336 | orchestrator | "election_epoch": 4,
2025-07-12 20:26:35.534342 | orchestrator | "quorum": [
2025-07-12 20:26:35.534347 | orchestrator | 0,
2025-07-12 20:26:35.534351 | orchestrator | 1,
2025-07-12 20:26:35.534355 | orchestrator | 2
2025-07-12 20:26:35.534358 | orchestrator | ],
2025-07-12 20:26:35.534362 | orchestrator | "quorum_names": [
2025-07-12 20:26:35.534367 | orchestrator | "testbed-node-0",
2025-07-12 20:26:35.534371 | orchestrator | "testbed-node-1",
2025-07-12 20:26:35.534374 | orchestrator | "testbed-node-2"
2025-07-12 20:26:35.534378 | orchestrator | ],
2025-07-12 20:26:35.534382 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-07-12 20:26:35.534387 | orchestrator | "quorum_age": 1646,
2025-07-12 20:26:35.534391 | orchestrator | "features": {
2025-07-12 20:26:35.534395 | orchestrator | "quorum_con": "4540138322906710015",
2025-07-12 20:26:35.534398 | orchestrator | "quorum_mon": [
2025-07-12 20:26:35.534402 | orchestrator | "kraken",
2025-07-12 20:26:35.534406 | orchestrator | "luminous",
2025-07-12 20:26:35.534410 | orchestrator | "mimic",
2025-07-12 20:26:35.534414 | orchestrator | "osdmap-prune",
2025-07-12 20:26:35.534418 | orchestrator | "nautilus",
2025-07-12 20:26:35.534421 | orchestrator | "octopus",
2025-07-12 20:26:35.534425 | orchestrator | "pacific",
2025-07-12 20:26:35.534429 | orchestrator | "elector-pinging",
2025-07-12 20:26:35.534432 | orchestrator | "quincy",
2025-07-12 20:26:35.534485 | orchestrator | "reef"
2025-07-12 20:26:35.534490 | orchestrator | ]
2025-07-12 20:26:35.534493 | orchestrator | },
2025-07-12 20:26:35.534497 | orchestrator | "monmap": {
2025-07-12 20:26:35.534501 | orchestrator | "epoch": 1,
2025-07-12 20:26:35.534505 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-07-12 20:26:35.534509 | orchestrator | "modified": "2025-07-12T19:58:57.188387Z",
2025-07-12 20:26:35.534513 | orchestrator | "created": "2025-07-12T19:58:57.188387Z",
2025-07-12 20:26:35.534517 | orchestrator | "min_mon_release": 18,
2025-07-12 20:26:35.534521 | orchestrator | "min_mon_release_name": "reef",
2025-07-12 20:26:35.534525 | orchestrator | "election_strategy": 1,
2025-07-12 20:26:35.534528 | orchestrator | "disallowed_leaders: ": "",
2025-07-12 20:26:35.534532 | orchestrator | "stretch_mode": false,
2025-07-12 20:26:35.534536 | orchestrator | "tiebreaker_mon": "",
2025-07-12 20:26:35.534539 | orchestrator | "removed_ranks: ": "",
2025-07-12 20:26:35.534543 | orchestrator | "features": {
2025-07-12 20:26:35.534547 | orchestrator | "persistent": [
2025-07-12 20:26:35.534551 | orchestrator | "kraken",
2025-07-12 20:26:35.534554 | orchestrator | "luminous",
2025-07-12 20:26:35.534558 | orchestrator | "mimic",
2025-07-12 20:26:35.534562 | orchestrator | "osdmap-prune",
2025-07-12 20:26:35.534565 | orchestrator | "nautilus",
2025-07-12 20:26:35.534569 | orchestrator | "octopus",
2025-07-12 20:26:35.534573 | orchestrator | "pacific",
2025-07-12 20:26:35.534589 | orchestrator | "elector-pinging",
2025-07-12 20:26:35.534593 | orchestrator | "quincy",
2025-07-12 20:26:35.534597 | orchestrator | "reef"
2025-07-12 20:26:35.534600 | orchestrator | ],
2025-07-12 20:26:35.534604 | orchestrator | "optional": []
2025-07-12 20:26:35.534608 | orchestrator | },
2025-07-12 20:26:35.534611 | orchestrator | "mons": [
2025-07-12 20:26:35.534615 | orchestrator | {
2025-07-12 20:26:35.534619 | orchestrator | "rank": 0,
2025-07-12 20:26:35.534623 | orchestrator | "name": "testbed-node-0",
2025-07-12 20:26:35.534627 | orchestrator | "public_addrs": {
2025-07-12 20:26:35.534631 | orchestrator | "addrvec": [
2025-07-12 20:26:35.534634 | orchestrator | {
2025-07-12 20:26:35.534638 | orchestrator | "type": "v2",
2025-07-12 20:26:35.534642 | orchestrator | "addr": "192.168.16.10:3300",
2025-07-12 20:26:35.534645 | orchestrator | "nonce": 0
2025-07-12 20:26:35.534649 | orchestrator | },
2025-07-12 20:26:35.534653 | orchestrator | {
2025-07-12 20:26:35.534657 | orchestrator | "type": "v1",
2025-07-12 20:26:35.534660 | orchestrator | "addr": "192.168.16.10:6789",
2025-07-12 20:26:35.534664 | orchestrator | "nonce": 0
2025-07-12 20:26:35.534668 | orchestrator | }
2025-07-12 20:26:35.534671 | orchestrator | ]
2025-07-12 20:26:35.534675 | orchestrator | },
2025-07-12 20:26:35.534679 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-07-12 20:26:35.534683 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-07-12 20:26:35.534686 | orchestrator | "priority": 0,
2025-07-12 20:26:35.534690 | orchestrator | "weight": 0,
2025-07-12 20:26:35.534694 | orchestrator | "crush_location": "{}"
2025-07-12 20:26:35.534697 | orchestrator | },
2025-07-12 20:26:35.534701 | orchestrator | {
2025-07-12 20:26:35.534705 | orchestrator | "rank": 1,
2025-07-12 20:26:35.534708 | orchestrator | "name": "testbed-node-1",
2025-07-12 20:26:35.534712 | orchestrator | "public_addrs": {
2025-07-12 20:26:35.534716 | orchestrator | "addrvec": [
2025-07-12 20:26:35.534719 | orchestrator | {
2025-07-12 20:26:35.534723 | orchestrator | "type": "v2",
2025-07-12 20:26:35.534727 | orchestrator | "addr": "192.168.16.11:3300",
2025-07-12 20:26:35.534730 | orchestrator | "nonce": 0
2025-07-12 20:26:35.534734 | orchestrator | },
2025-07-12 20:26:35.534738 | orchestrator | {
2025-07-12 20:26:35.534741 | orchestrator | "type": "v1",
2025-07-12 20:26:35.534745 | orchestrator | "addr": "192.168.16.11:6789",
2025-07-12 20:26:35.534749 | orchestrator | "nonce": 0
2025-07-12 20:26:35.534752 | orchestrator | }
2025-07-12 20:26:35.534756 | orchestrator | ]
2025-07-12 20:26:35.534760 | orchestrator | },
2025-07-12 20:26:35.534764 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-07-12 20:26:35.534767 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-07-12 20:26:35.534771 | orchestrator | "priority": 0,
2025-07-12 20:26:35.534775 | orchestrator | "weight": 0,
2025-07-12 20:26:35.534778 | orchestrator | "crush_location": "{}"
2025-07-12 20:26:35.534782 | orchestrator | },
2025-07-12 20:26:35.534786 | orchestrator | {
2025-07-12 20:26:35.534801 | orchestrator | "rank": 2,
2025-07-12 20:26:35.534805 | orchestrator | "name": "testbed-node-2",
2025-07-12 20:26:35.534809 | orchestrator | "public_addrs": {
2025-07-12 20:26:35.534813 | orchestrator | "addrvec": [
2025-07-12 20:26:35.534816 | orchestrator | {
2025-07-12 20:26:35.534820 | orchestrator | "type": "v2",
2025-07-12 20:26:35.534823 | orchestrator | "addr": "192.168.16.12:3300",
2025-07-12 20:26:35.534827 | orchestrator | "nonce": 0
2025-07-12 20:26:35.534831 | orchestrator | },
2025-07-12 20:26:35.534835 | orchestrator | {
2025-07-12 20:26:35.534838 | orchestrator | "type": "v1",
2025-07-12 20:26:35.534842 | orchestrator | "addr": "192.168.16.12:6789",
2025-07-12 20:26:35.534846 | orchestrator | "nonce": 0
2025-07-12 20:26:35.534849 | orchestrator | }
2025-07-12 20:26:35.534853 | orchestrator | ]
2025-07-12 20:26:35.534857 | orchestrator | },
2025-07-12 20:26:35.534860 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-07-12 20:26:35.534864 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-07-12 20:26:35.534868 | orchestrator | "priority": 0,
2025-07-12 20:26:35.534871 | orchestrator | "weight": 0,
2025-07-12 20:26:35.534875 | orchestrator | "crush_location": "{}"
2025-07-12 20:26:35.534879 | orchestrator | }
2025-07-12 20:26:35.534882 | orchestrator | ]
2025-07-12 20:26:35.534886 | orchestrator | }
2025-07-12 20:26:35.534890 | orchestrator | }
2025-07-12 20:26:35.535457 | orchestrator |
2025-07-12 20:26:35.535467 | orchestrator | # Ceph free space status
2025-07-12 20:26:35.535471 | orchestrator |
2025-07-12 20:26:35.535475 | orchestrator | + echo
2025-07-12 20:26:35.535479 | orchestrator | + echo '# Ceph free space status'
2025-07-12 20:26:35.535483 | orchestrator | + echo
2025-07-12 20:26:35.535487 | orchestrator | + ceph df
2025-07-12 20:26:36.138221 | orchestrator | --- RAW STORAGE ---
2025-07-12 20:26:36.138354 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-07-12 20:26:36.138382 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-07-12 20:26:36.138395 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-07-12 20:26:36.138408 | orchestrator |
2025-07-12 20:26:36.138421 | orchestrator | --- POOLS ---
2025-07-12 20:26:36.138435 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-07-12 20:26:36.138450 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-07-12 20:26:36.138464 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-07-12 20:26:36.138476 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-07-12 20:26:36.138490 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-07-12 20:26:36.138503 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-07-12 20:26:36.138515 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-07-12 20:26:36.138528 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-07-12 20:26:36.138540 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-07-12 20:26:36.138605 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2025-07-12 20:26:36.138619 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-07-12 20:26:36.138631 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-07-12 20:26:36.138642 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.94 35 GiB
2025-07-12 20:26:36.138653 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-07-12 20:26:36.138664 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-07-12 20:26:36.189622 | orchestrator | ++ semver latest 5.0.0
2025-07-12 20:26:36.241580 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-12 20:26:36.241663 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-12 20:26:36.241673 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-07-12 20:26:36.241682 | orchestrator | + osism apply facts
2025-07-12 20:26:48.020812 | orchestrator | 2025-07-12 20:26:48 | INFO  | Task ae8a5382-e0ea-4f2a-8a33-839fb3ec43e4 (facts) was prepared for execution.
2025-07-12 20:26:48.020916 | orchestrator | 2025-07-12 20:26:48 | INFO  | It takes a moment until task ae8a5382-e0ea-4f2a-8a33-839fb3ec43e4 (facts) has been started and output is visible here.
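The Ceph checks above run `ceph -s`, `ceph versions`, `ceph osd df tree`, `ceph mon stat`, `ceph quorum_status | jq`, and `ceph df` in sequence but only print their output. A minimal sketch of how such checks could be turned into hard assertions; the helper functions are hypothetical and not part of the testbed's check-services.sh (python3 stands in for jq so the sketch has no extra dependency):

```shell
# Hypothetical helpers, illustrative only -- not part of the OSISM testbed.

# Succeed only if the `ceph -s` text on stdin reports HEALTH_OK.
check_health_ok() {
    grep -q 'health: HEALTH_OK'
}

# Print the quorum size from `ceph quorum_status` JSON on stdin.
quorum_size() {
    python3 -c 'import json,sys; print(len(json.load(sys.stdin)["quorum_names"]))'
}

# Usage against a live cluster (commented out here, as no cluster is assumed):
# ceph -s | check_health_ok
# test "$(ceph quorum_status | quorum_size)" -eq 3
```

With `set -e` in effect, a degraded cluster or a lost monitor would then fail the job at this point instead of merely being logged.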
2025-07-12 20:27:00.569211 | orchestrator |
2025-07-12 20:27:00.569419 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-12 20:27:00.569449 | orchestrator |
2025-07-12 20:27:00.569461 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 20:27:00.569471 | orchestrator | Saturday 12 July 2025 20:26:51 +0000 (0:00:00.252) 0:00:00.252 *********
2025-07-12 20:27:00.569481 | orchestrator | ok: [testbed-manager]
2025-07-12 20:27:00.569492 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:27:00.569502 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:27:00.569512 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:27:00.569521 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:00.569530 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:00.569540 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:00.569549 | orchestrator |
2025-07-12 20:27:00.569559 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 20:27:00.569568 | orchestrator | Saturday 12 July 2025 20:26:53 +0000 (0:00:01.357) 0:00:01.610 *********
2025-07-12 20:27:00.569578 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:27:00.569588 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:27:00.569597 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:27:00.569606 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:27:00.569616 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:00.569625 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:00.569634 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:00.569644 | orchestrator |
2025-07-12 20:27:00.569653 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 20:27:00.569663 | orchestrator |
2025-07-12 20:27:00.569672 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 20:27:00.569681 | orchestrator | Saturday 12 July 2025 20:26:54 +0000 (0:00:01.200) 0:00:02.810 *********
2025-07-12 20:27:00.569691 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:27:00.569700 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:27:00.569710 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:27:00.569720 | orchestrator | ok: [testbed-manager]
2025-07-12 20:27:00.569730 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:00.569741 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:00.569752 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:00.569762 | orchestrator |
2025-07-12 20:27:00.569773 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 20:27:00.569783 | orchestrator |
2025-07-12 20:27:00.569795 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 20:27:00.569806 | orchestrator | Saturday 12 July 2025 20:26:59 +0000 (0:00:05.134) 0:00:07.945 *********
2025-07-12 20:27:00.569817 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:27:00.569827 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:27:00.569838 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:27:00.569848 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:27:00.569859 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:00.569870 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:00.569880 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:00.569890 | orchestrator |
2025-07-12 20:27:00.569901 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:27:00.569912 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:27:00.569925 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:27:00.569936 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:27:00.569947 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:27:00.569985 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:27:00.569996 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:27:00.570005 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:27:00.570014 | orchestrator |
2025-07-12 20:27:00.570089 | orchestrator |
2025-07-12 20:27:00.570099 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:27:00.570108 | orchestrator | Saturday 12 July 2025 20:27:00 +0000 (0:00:00.667) 0:00:08.613 *********
2025-07-12 20:27:00.570118 | orchestrator | ===============================================================================
2025-07-12 20:27:00.570127 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.13s
2025-07-12 20:27:00.570137 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.36s
2025-07-12 20:27:00.570178 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.20s
2025-07-12 20:27:00.570189 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.67s
2025-07-12 20:27:00.879546 | orchestrator | + osism validate ceph-mons
2025-07-12 20:27:31.435754 | orchestrator |
2025-07-12 20:27:31.435822 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-07-12 20:27:31.435832 | orchestrator |
2025-07-12 20:27:31.435839 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 20:27:31.435846 | orchestrator | Saturday 12 July 2025 20:27:16 +0000 (0:00:00.406) 0:00:00.406 *********
2025-07-12 20:27:31.435852 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:27:31.435859 | orchestrator |
2025-07-12 20:27:31.435865 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 20:27:31.435871 | orchestrator | Saturday 12 July 2025 20:27:17 +0000 (0:00:00.550) 0:00:00.956 *********
2025-07-12 20:27:31.435877 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:27:31.435883 | orchestrator |
2025-07-12 20:27:31.435890 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 20:27:31.435896 | orchestrator | Saturday 12 July 2025 20:27:18 +0000 (0:00:00.189) 0:00:01.723 *********
2025-07-12 20:27:31.435902 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:27:31.435909 | orchestrator |
2025-07-12 20:27:31.435915 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-12 20:27:31.435921 | orchestrator | Saturday 12 July 2025 20:27:18 +0000 (0:00:00.262) 0:00:01.913 *********
2025-07-12 20:27:31.435928 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:27:31.435934 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:27:31.435940 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:27:31.435946 | orchestrator |
2025-07-12 20:27:31.435953 | orchestrator | TASK [Get container info] ******************************************************
2025-07-12 20:27:31.435959 | orchestrator | Saturday 12 July 2025 20:27:18 +0000 (0:00:00.262) 0:00:02.176 *********
2025-07-12 20:27:31.435965 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:27:31.435982 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:27:31.435989 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:27:31.435995 | orchestrator |
2025-07-12 20:27:31.436012 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-12 20:27:31.436019 | orchestrator | Saturday 12 July 2025 20:27:19 +0000 (0:00:00.944) 0:00:03.120 *********
2025-07-12 20:27:31.436025 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:27:31.436032 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:27:31.436038 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:27:31.436044 | orchestrator |
2025-07-12 20:27:31.436056 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-12 20:27:31.436096 | orchestrator | Saturday 12 July 2025 20:27:19 +0000 (0:00:00.293) 0:00:03.414 *********
2025-07-12 20:27:31.436104 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:27:31.436110 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:27:31.436116 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:27:31.436122 | orchestrator |
2025-07-12 20:27:31.436128 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:27:31.436134 | orchestrator | Saturday 12 July 2025 20:27:20 +0000 (0:00:00.347) 0:00:03.761 *********
2025-07-12 20:27:31.436140 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:27:31.436146 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:27:31.436152 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:27:31.436158 | orchestrator |
2025-07-12 20:27:31.436165 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-07-12 20:27:31.436171 | orchestrator | Saturday 12 July 2025 20:27:20 +0000 (0:00:00.277) 0:00:04.039 *********
2025-07-12 20:27:31.436177 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:27:31.436183 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:27:31.436189 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:27:31.436195 | orchestrator |
2025-07-12 20:27:31.436202 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-07-12 20:27:31.436208 | orchestrator | Saturday 12 July 2025 20:27:20 +0000 (0:00:00.258) 0:00:04.298 *********
2025-07-12 20:27:31.436216 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:27:31.436227 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:27:31.436237 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:27:31.436247 | orchestrator |
2025-07-12 20:27:31.436256 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 20:27:31.436267 | orchestrator | Saturday 12 July 2025 20:27:21 +0000 (0:00:00.276) 0:00:04.575 *********
2025-07-12 20:27:31.436277 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:27:31.436286 | orchestrator |
2025-07-12 20:27:31.436296 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 20:27:31.436306 | orchestrator | Saturday 12 July 2025 20:27:21 +0000 (0:00:00.501) 0:00:05.076 *********
2025-07-12 20:27:31.436317 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:27:31.436326 | orchestrator |
2025-07-12 20:27:31.436337 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 20:27:31.436364 | orchestrator | Saturday 12 July 2025 20:27:21 +0000 (0:00:00.245) 0:00:05.322 *********
2025-07-12 20:27:31.436375 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:27:31.436385 | orchestrator |
2025-07-12 20:27:31.436396 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:27:31.436407 | orchestrator | Saturday 12 July 2025 20:27:22 +0000 (0:00:00.225) 0:00:05.547 *********
2025-07-12 20:27:31.436418 | orchestrator |
2025-07-12 20:27:31.436429 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:27:31.436439 | orchestrator |
Saturday 12 July 2025 20:27:22 +0000 (0:00:00.065) 0:00:05.612 ********* 2025-07-12 20:27:31.436450 | orchestrator | 2025-07-12 20:27:31.436460 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 20:27:31.436480 | orchestrator | Saturday 12 July 2025 20:27:22 +0000 (0:00:00.074) 0:00:05.686 ********* 2025-07-12 20:27:31.436490 | orchestrator | 2025-07-12 20:27:31.436506 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-12 20:27:31.436516 | orchestrator | Saturday 12 July 2025 20:27:22 +0000 (0:00:00.066) 0:00:05.753 ********* 2025-07-12 20:27:31.436527 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:27:31.436537 | orchestrator | 2025-07-12 20:27:31.436548 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-07-12 20:27:31.436558 | orchestrator | Saturday 12 July 2025 20:27:22 +0000 (0:00:00.242) 0:00:05.996 ********* 2025-07-12 20:27:31.436569 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:27:31.436579 | orchestrator | 2025-07-12 20:27:31.436602 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-07-12 20:27:31.436619 | orchestrator | Saturday 12 July 2025 20:27:22 +0000 (0:00:00.256) 0:00:06.253 ********* 2025-07-12 20:27:31.436630 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:27:31.436640 | orchestrator | 2025-07-12 20:27:31.436651 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-07-12 20:27:31.436661 | orchestrator | Saturday 12 July 2025 20:27:22 +0000 (0:00:00.133) 0:00:06.386 ********* 2025-07-12 20:27:31.436672 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:27:31.436682 | orchestrator | 2025-07-12 20:27:31.436693 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-07-12 20:27:31.436703 | orchestrator | 
Saturday 12 July 2025 20:27:24 +0000 (0:00:01.596) 0:00:07.982 ********* 2025-07-12 20:27:31.436713 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:27:31.436723 | orchestrator | 2025-07-12 20:27:31.436734 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-07-12 20:27:31.436744 | orchestrator | Saturday 12 July 2025 20:27:24 +0000 (0:00:00.348) 0:00:08.330 ********* 2025-07-12 20:27:31.436754 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:27:31.436764 | orchestrator | 2025-07-12 20:27:31.436774 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-07-12 20:27:31.436785 | orchestrator | Saturday 12 July 2025 20:27:25 +0000 (0:00:00.330) 0:00:08.661 ********* 2025-07-12 20:27:31.436795 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:27:31.436806 | orchestrator | 2025-07-12 20:27:31.436816 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-07-12 20:27:31.436826 | orchestrator | Saturday 12 July 2025 20:27:25 +0000 (0:00:00.343) 0:00:09.004 ********* 2025-07-12 20:27:31.436837 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:27:31.436847 | orchestrator | 2025-07-12 20:27:31.436857 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-07-12 20:27:31.436867 | orchestrator | Saturday 12 July 2025 20:27:25 +0000 (0:00:00.319) 0:00:09.324 ********* 2025-07-12 20:27:31.436878 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:27:31.436888 | orchestrator | 2025-07-12 20:27:31.436898 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-07-12 20:27:31.436908 | orchestrator | Saturday 12 July 2025 20:27:25 +0000 (0:00:00.114) 0:00:09.439 ********* 2025-07-12 20:27:31.436919 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:27:31.436929 | orchestrator | 2025-07-12 20:27:31.436939 | orchestrator | TASK 
[Prepare status test vars] ************************************************ 2025-07-12 20:27:31.436949 | orchestrator | Saturday 12 July 2025 20:27:26 +0000 (0:00:00.129) 0:00:09.568 ********* 2025-07-12 20:27:31.436959 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:27:31.436970 | orchestrator | 2025-07-12 20:27:31.436980 | orchestrator | TASK [Gather status data] ****************************************************** 2025-07-12 20:27:31.436990 | orchestrator | Saturday 12 July 2025 20:27:26 +0000 (0:00:00.102) 0:00:09.670 ********* 2025-07-12 20:27:31.437000 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:27:31.437011 | orchestrator | 2025-07-12 20:27:31.437021 | orchestrator | TASK [Set health test data] **************************************************** 2025-07-12 20:27:31.437031 | orchestrator | Saturday 12 July 2025 20:27:27 +0000 (0:00:01.420) 0:00:11.091 ********* 2025-07-12 20:27:31.437042 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:27:31.437053 | orchestrator | 2025-07-12 20:27:31.437064 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-07-12 20:27:31.437075 | orchestrator | Saturday 12 July 2025 20:27:27 +0000 (0:00:00.302) 0:00:11.393 ********* 2025-07-12 20:27:31.437087 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:27:31.437098 | orchestrator | 2025-07-12 20:27:31.437110 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-07-12 20:27:31.437122 | orchestrator | Saturday 12 July 2025 20:27:28 +0000 (0:00:00.165) 0:00:11.559 ********* 2025-07-12 20:27:31.437133 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:27:31.437141 | orchestrator | 2025-07-12 20:27:31.437148 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-07-12 20:27:31.437159 | orchestrator | Saturday 12 July 2025 20:27:28 +0000 (0:00:00.148) 0:00:11.707 ********* 2025-07-12 20:27:31.437165 | 
orchestrator | skipping: [testbed-node-0] 2025-07-12 20:27:31.437171 | orchestrator | 2025-07-12 20:27:31.437177 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-07-12 20:27:31.437183 | orchestrator | Saturday 12 July 2025 20:27:28 +0000 (0:00:00.142) 0:00:11.850 ********* 2025-07-12 20:27:31.437189 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:27:31.437195 | orchestrator | 2025-07-12 20:27:31.437202 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-12 20:27:31.437208 | orchestrator | Saturday 12 July 2025 20:27:28 +0000 (0:00:00.332) 0:00:12.182 ********* 2025-07-12 20:27:31.437214 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:27:31.437220 | orchestrator | 2025-07-12 20:27:31.437226 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-12 20:27:31.437232 | orchestrator | Saturday 12 July 2025 20:27:29 +0000 (0:00:00.370) 0:00:12.552 ********* 2025-07-12 20:27:31.437238 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:27:31.437244 | orchestrator | 2025-07-12 20:27:31.437250 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-12 20:27:31.437256 | orchestrator | Saturday 12 July 2025 20:27:29 +0000 (0:00:00.251) 0:00:12.804 ********* 2025-07-12 20:27:31.437262 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:27:31.437270 | orchestrator | 2025-07-12 20:27:31.437282 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-12 20:27:31.437288 | orchestrator | Saturday 12 July 2025 20:27:30 +0000 (0:00:01.498) 0:00:14.303 ********* 2025-07-12 20:27:31.437294 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:27:31.437300 | orchestrator | 2025-07-12 20:27:31.437306 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2025-07-12 20:27:31.437312 | orchestrator | Saturday 12 July 2025 20:27:31 +0000 (0:00:00.260) 0:00:14.564 ********* 2025-07-12 20:27:31.437318 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:27:31.437325 | orchestrator | 2025-07-12 20:27:31.437336 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 20:27:33.663119 | orchestrator | Saturday 12 July 2025 20:27:31 +0000 (0:00:00.224) 0:00:14.788 ********* 2025-07-12 20:27:33.663226 | orchestrator | 2025-07-12 20:27:33.663238 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 20:27:33.663250 | orchestrator | Saturday 12 July 2025 20:27:31 +0000 (0:00:00.053) 0:00:14.842 ********* 2025-07-12 20:27:33.663257 | orchestrator | 2025-07-12 20:27:33.663264 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 20:27:33.663272 | orchestrator | Saturday 12 July 2025 20:27:31 +0000 (0:00:00.053) 0:00:14.896 ********* 2025-07-12 20:27:33.663279 | orchestrator | 2025-07-12 20:27:33.663285 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-12 20:27:33.663292 | orchestrator | Saturday 12 July 2025 20:27:31 +0000 (0:00:00.056) 0:00:14.952 ********* 2025-07-12 20:27:33.663334 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:27:33.663417 | orchestrator | 2025-07-12 20:27:33.663426 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-12 20:27:33.663432 | orchestrator | Saturday 12 July 2025 20:27:32 +0000 (0:00:01.361) 0:00:16.314 ********* 2025-07-12 20:27:33.663439 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-07-12 20:27:33.663445 | orchestrator |  "msg": [ 
2025-07-12 20:27:33.663453 | orchestrator |  "Validator run completed.", 2025-07-12 20:27:33.663461 | orchestrator |  "You can find the report file here:", 2025-07-12 20:27:33.663467 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-07-12T20:27:17+00:00-report.json", 2025-07-12 20:27:33.663475 | orchestrator |  "on the following host:", 2025-07-12 20:27:33.663482 | orchestrator |  "testbed-manager" 2025-07-12 20:27:33.663511 | orchestrator |  ] 2025-07-12 20:27:33.663519 | orchestrator | } 2025-07-12 20:27:33.663525 | orchestrator | 2025-07-12 20:27:33.663531 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:27:33.663539 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 20:27:33.663547 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:27:33.663554 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:27:33.663561 | orchestrator | 2025-07-12 20:27:33.663567 | orchestrator | 2025-07-12 20:27:33.663573 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:27:33.663580 | orchestrator | Saturday 12 July 2025 20:27:33 +0000 (0:00:00.521) 0:00:16.835 ********* 2025-07-12 20:27:33.663587 | orchestrator | =============================================================================== 2025-07-12 20:27:33.663593 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.60s 2025-07-12 20:27:33.663600 | orchestrator | Aggregate test results step one ----------------------------------------- 1.50s 2025-07-12 20:27:33.663606 | orchestrator | Gather status data ------------------------------------------------------ 1.42s 2025-07-12 20:27:33.663613 | orchestrator | Write report file 
------------------------------------------------------- 1.36s 2025-07-12 20:27:33.663619 | orchestrator | Get container info ------------------------------------------------------ 0.94s 2025-07-12 20:27:33.663625 | orchestrator | Create report output directory ------------------------------------------ 0.77s 2025-07-12 20:27:33.663647 | orchestrator | Get timestamp for report file ------------------------------------------- 0.55s 2025-07-12 20:27:33.663653 | orchestrator | Print report file information ------------------------------------------- 0.52s 2025-07-12 20:27:33.663659 | orchestrator | Aggregate test results step one ----------------------------------------- 0.50s 2025-07-12 20:27:33.663666 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.37s 2025-07-12 20:27:33.663674 | orchestrator | Set quorum test data ---------------------------------------------------- 0.35s 2025-07-12 20:27:33.663682 | orchestrator | Set test result to passed if container is existing ---------------------- 0.35s 2025-07-12 20:27:33.663690 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s 2025-07-12 20:27:33.663698 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s 2025-07-12 20:27:33.663705 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.33s 2025-07-12 20:27:33.663713 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2025-07-12 20:27:33.663720 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2025-07-12 20:27:33.663728 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s 2025-07-12 20:27:33.663736 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s 2025-07-12 20:27:33.663744 | orchestrator | Set test result to passed if 
ceph-mon is running ------------------------ 0.28s 2025-07-12 20:27:34.022273 | orchestrator | + osism validate ceph-mgrs 2025-07-12 20:28:05.548627 | orchestrator | 2025-07-12 20:28:05.548789 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-07-12 20:28:05.548807 | orchestrator | 2025-07-12 20:28:05.548817 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-12 20:28:05.548826 | orchestrator | Saturday 12 July 2025 20:27:50 +0000 (0:00:00.439) 0:00:00.439 ********* 2025-07-12 20:28:05.548835 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:28:05.548844 | orchestrator | 2025-07-12 20:28:05.548853 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-12 20:28:05.548862 | orchestrator | Saturday 12 July 2025 20:27:51 +0000 (0:00:00.677) 0:00:01.117 ********* 2025-07-12 20:28:05.548903 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:28:05.548913 | orchestrator | 2025-07-12 20:28:05.548922 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-12 20:28:05.548930 | orchestrator | Saturday 12 July 2025 20:27:52 +0000 (0:00:00.847) 0:00:01.964 ********* 2025-07-12 20:28:05.548939 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.548949 | orchestrator | 2025-07-12 20:28:05.548958 | orchestrator | TASK [Prepare test data for container existence test] ************************** 2025-07-12 20:28:05.548967 | orchestrator | Saturday 12 July 2025 20:27:52 +0000 (0:00:00.255) 0:00:02.219 ********* 2025-07-12 20:28:05.548975 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.548984 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:05.548993 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:05.549001 | orchestrator | 2025-07-12 20:28:05.549010 | orchestrator | TASK [Get container 
info] ****************************************************** 2025-07-12 20:28:05.549019 | orchestrator | Saturday 12 July 2025 20:27:52 +0000 (0:00:00.311) 0:00:02.531 ********* 2025-07-12 20:28:05.549028 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.549036 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:05.549044 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:05.549053 | orchestrator | 2025-07-12 20:28:05.549062 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-07-12 20:28:05.549072 | orchestrator | Saturday 12 July 2025 20:27:53 +0000 (0:00:01.010) 0:00:03.541 ********* 2025-07-12 20:28:05.549081 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:05.549090 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:05.549098 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:05.549107 | orchestrator | 2025-07-12 20:28:05.549115 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-07-12 20:28:05.549124 | orchestrator | Saturday 12 July 2025 20:27:54 +0000 (0:00:00.289) 0:00:03.831 ********* 2025-07-12 20:28:05.549132 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.549141 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:05.549151 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:05.549161 | orchestrator | 2025-07-12 20:28:05.549171 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 20:28:05.549181 | orchestrator | Saturday 12 July 2025 20:27:54 +0000 (0:00:00.535) 0:00:04.366 ********* 2025-07-12 20:28:05.549192 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.549202 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:05.549211 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:05.549222 | orchestrator | 2025-07-12 20:28:05.549232 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2025-07-12 20:28:05.549241 | orchestrator | Saturday 12 July 2025 20:27:54 +0000 (0:00:00.324) 0:00:04.691 ********* 2025-07-12 20:28:05.549251 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:05.549262 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:05.549272 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:05.549282 | orchestrator | 2025-07-12 20:28:05.549292 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-07-12 20:28:05.549303 | orchestrator | Saturday 12 July 2025 20:27:55 +0000 (0:00:00.285) 0:00:04.976 ********* 2025-07-12 20:28:05.549313 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.549322 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:05.549333 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:05.549362 | orchestrator | 2025-07-12 20:28:05.549374 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-12 20:28:05.549384 | orchestrator | Saturday 12 July 2025 20:27:55 +0000 (0:00:00.328) 0:00:05.305 ********* 2025-07-12 20:28:05.549394 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:05.549404 | orchestrator | 2025-07-12 20:28:05.549414 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-12 20:28:05.549490 | orchestrator | Saturday 12 July 2025 20:27:56 +0000 (0:00:00.688) 0:00:05.994 ********* 2025-07-12 20:28:05.549511 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:05.549521 | orchestrator | 2025-07-12 20:28:05.549530 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-12 20:28:05.549539 | orchestrator | Saturday 12 July 2025 20:27:56 +0000 (0:00:00.262) 0:00:06.256 ********* 2025-07-12 20:28:05.549548 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:05.549556 | orchestrator | 2025-07-12 20:28:05.549565 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2025-07-12 20:28:05.549573 | orchestrator | Saturday 12 July 2025 20:27:56 +0000 (0:00:00.264) 0:00:06.520 ********* 2025-07-12 20:28:05.549582 | orchestrator | 2025-07-12 20:28:05.549591 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 20:28:05.549599 | orchestrator | Saturday 12 July 2025 20:27:56 +0000 (0:00:00.068) 0:00:06.589 ********* 2025-07-12 20:28:05.549608 | orchestrator | 2025-07-12 20:28:05.549616 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 20:28:05.549625 | orchestrator | Saturday 12 July 2025 20:27:56 +0000 (0:00:00.067) 0:00:06.656 ********* 2025-07-12 20:28:05.549633 | orchestrator | 2025-07-12 20:28:05.549642 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-12 20:28:05.549651 | orchestrator | Saturday 12 July 2025 20:27:56 +0000 (0:00:00.071) 0:00:06.728 ********* 2025-07-12 20:28:05.549660 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:05.549669 | orchestrator | 2025-07-12 20:28:05.549677 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-07-12 20:28:05.549686 | orchestrator | Saturday 12 July 2025 20:27:57 +0000 (0:00:00.272) 0:00:07.001 ********* 2025-07-12 20:28:05.549695 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:05.549703 | orchestrator | 2025-07-12 20:28:05.549734 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-07-12 20:28:05.549744 | orchestrator | Saturday 12 July 2025 20:27:57 +0000 (0:00:00.236) 0:00:07.237 ********* 2025-07-12 20:28:05.549753 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.549761 | orchestrator | 2025-07-12 20:28:05.549770 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2025-07-12 20:28:05.549778 | orchestrator | Saturday 12 July 2025 20:27:57 +0000 (0:00:00.142) 0:00:07.380 ********* 2025-07-12 20:28:05.549787 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:28:05.549796 | orchestrator | 2025-07-12 20:28:05.549804 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-07-12 20:28:05.549813 | orchestrator | Saturday 12 July 2025 20:27:59 +0000 (0:00:01.986) 0:00:09.366 ********* 2025-07-12 20:28:05.549821 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.549830 | orchestrator | 2025-07-12 20:28:05.549839 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-07-12 20:28:05.549847 | orchestrator | Saturday 12 July 2025 20:27:59 +0000 (0:00:00.258) 0:00:09.624 ********* 2025-07-12 20:28:05.549856 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.549864 | orchestrator | 2025-07-12 20:28:05.549873 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-07-12 20:28:05.549881 | orchestrator | Saturday 12 July 2025 20:28:00 +0000 (0:00:00.782) 0:00:10.407 ********* 2025-07-12 20:28:05.549890 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:05.549898 | orchestrator | 2025-07-12 20:28:05.549907 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-07-12 20:28:05.549916 | orchestrator | Saturday 12 July 2025 20:28:00 +0000 (0:00:00.150) 0:00:10.557 ********* 2025-07-12 20:28:05.549924 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:05.549933 | orchestrator | 2025-07-12 20:28:05.549942 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-12 20:28:05.549950 | orchestrator | Saturday 12 July 2025 20:28:00 +0000 (0:00:00.151) 0:00:10.709 ********* 2025-07-12 20:28:05.549959 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 
20:28:05.549968 | orchestrator | 2025-07-12 20:28:05.549977 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-12 20:28:05.550005 | orchestrator | Saturday 12 July 2025 20:28:01 +0000 (0:00:00.243) 0:00:10.952 ********* 2025-07-12 20:28:05.550070 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:05.550080 | orchestrator | 2025-07-12 20:28:05.550089 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-12 20:28:05.550097 | orchestrator | Saturday 12 July 2025 20:28:01 +0000 (0:00:00.243) 0:00:11.196 ********* 2025-07-12 20:28:05.550106 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:28:05.550114 | orchestrator | 2025-07-12 20:28:05.550123 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-12 20:28:05.550132 | orchestrator | Saturday 12 July 2025 20:28:02 +0000 (0:00:01.296) 0:00:12.492 ********* 2025-07-12 20:28:05.550140 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:28:05.550149 | orchestrator | 2025-07-12 20:28:05.550157 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-12 20:28:05.550166 | orchestrator | Saturday 12 July 2025 20:28:02 +0000 (0:00:00.251) 0:00:12.744 ********* 2025-07-12 20:28:05.550174 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-12 20:28:05.550183 | orchestrator | 2025-07-12 20:28:05.550192 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 20:28:05.550200 | orchestrator | Saturday 12 July 2025 20:28:03 +0000 (0:00:00.252) 0:00:12.996 ********* 2025-07-12 20:28:05.550208 | orchestrator | 2025-07-12 20:28:05.550217 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-12 20:28:05.550226 | orchestrator 
| Saturday 12 July 2025 20:28:03 +0000 (0:00:00.069) 0:00:13.066 *********
2025-07-12 20:28:05.550234 | orchestrator |
2025-07-12 20:28:05.550243 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:28:05.550251 | orchestrator | Saturday 12 July 2025 20:28:03 +0000 (0:00:00.066) 0:00:13.133 *********
2025-07-12 20:28:05.550260 | orchestrator |
2025-07-12 20:28:05.550268 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 20:28:05.550277 | orchestrator | Saturday 12 July 2025 20:28:03 +0000 (0:00:00.072) 0:00:13.205 *********
2025-07-12 20:28:05.550285 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:28:05.550294 | orchestrator |
2025-07-12 20:28:05.550302 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 20:28:05.550311 | orchestrator | Saturday 12 July 2025 20:28:05 +0000 (0:00:01.700) 0:00:14.906 *********
2025-07-12 20:28:05.550319 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-07-12 20:28:05.550328 | orchestrator |     "msg": [
2025-07-12 20:28:05.550336 | orchestrator |         "Validator run completed.",
2025-07-12 20:28:05.550378 | orchestrator |         "You can find the report file here:",
2025-07-12 20:28:05.550390 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2025-07-12T20:27:51+00:00-report.json",
2025-07-12 20:28:05.550399 | orchestrator |         "on the following host:",
2025-07-12 20:28:05.550408 | orchestrator |         "testbed-manager"
2025-07-12 20:28:05.550417 | orchestrator |     ]
2025-07-12 20:28:05.550426 | orchestrator | }
2025-07-12 20:28:05.550435 | orchestrator |
2025-07-12 20:28:05.550443 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:28:05.550453 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 20:28:05.550463 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:28:05.550480 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:28:05.876406 | orchestrator |
2025-07-12 20:28:05.876512 | orchestrator |
2025-07-12 20:28:05.876527 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:28:05.878180 | orchestrator | Saturday 12 July 2025 20:28:05 +0000 (0:00:00.446) 0:00:15.353 *********
2025-07-12 20:28:05.878220 | orchestrator | ===============================================================================
2025-07-12 20:28:05.878231 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.99s
2025-07-12 20:28:05.878242 | orchestrator | Write report file ------------------------------------------------------- 1.70s
2025-07-12 20:28:05.878253 | orchestrator | Aggregate test results step one ----------------------------------------- 1.30s
2025-07-12 20:28:05.878264 | orchestrator | Get container info ------------------------------------------------------ 1.01s
2025-07-12 20:28:05.878275 | orchestrator | Create report output directory ------------------------------------------ 0.85s
2025-07-12 20:28:05.878286 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.78s
2025-07-12 20:28:05.878296 | orchestrator | Aggregate test results step one ----------------------------------------- 0.69s
2025-07-12 20:28:05.878307 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s
2025-07-12 20:28:05.878318 | orchestrator | Set test result to passed if container is existing ---------------------- 0.54s
2025-07-12 20:28:05.878328 | orchestrator | Print report file information ------------------------------------------- 0.45s
2025-07-12 20:28:05.878339 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.33s
2025-07-12 20:28:05.878385 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-07-12 20:28:05.878403 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2025-07-12 20:28:05.878414 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2025-07-12 20:28:05.878425 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s
2025-07-12 20:28:05.878456 | orchestrator | Print report file information ------------------------------------------- 0.27s
2025-07-12 20:28:05.878467 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s
2025-07-12 20:28:05.878478 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2025-07-12 20:28:05.878489 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s
2025-07-12 20:28:05.878499 | orchestrator | Define report vars ------------------------------------------------------ 0.26s
2025-07-12 20:28:06.200938 | orchestrator | + osism validate ceph-osds
2025-07-12 20:28:27.026010 | orchestrator |
2025-07-12 20:28:27.026172 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-07-12 20:28:27.026189 | orchestrator |
2025-07-12 20:28:27.026201 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 20:28:27.026212 | orchestrator | Saturday 12 July 2025 20:28:22 +0000 (0:00:00.442) 0:00:00.442 *********
2025-07-12 20:28:27.026224 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:28:27.026235 | orchestrator |
2025-07-12 20:28:27.026246 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 20:28:27.026256 | orchestrator | Saturday 12 July 2025 20:28:23 +0000 (0:00:00.679) 0:00:01.121 *********
2025-07-12 20:28:27.026267 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:28:27.026278 | orchestrator |
2025-07-12 20:28:27.026290 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 20:28:27.026301 | orchestrator | Saturday 12 July 2025 20:28:23 +0000 (0:00:00.244) 0:00:01.366 *********
2025-07-12 20:28:27.026312 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:28:27.026322 | orchestrator |
2025-07-12 20:28:27.026333 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 20:28:27.026344 | orchestrator | Saturday 12 July 2025 20:28:24 +0000 (0:00:01.067) 0:00:02.434 *********
2025-07-12 20:28:27.026380 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:27.026392 | orchestrator |
2025-07-12 20:28:27.026403 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-07-12 20:28:27.026440 | orchestrator | Saturday 12 July 2025 20:28:24 +0000 (0:00:00.126) 0:00:02.561 *********
2025-07-12 20:28:27.026452 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:27.026462 | orchestrator |
2025-07-12 20:28:27.026473 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-07-12 20:28:27.026484 | orchestrator | Saturday 12 July 2025 20:28:24 +0000 (0:00:00.121) 0:00:02.682 *********
2025-07-12 20:28:27.026494 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:27.026505 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:28:27.026515 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:28:27.026528 | orchestrator |
2025-07-12 20:28:27.026540 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-07-12 20:28:27.026552 | orchestrator | Saturday 12 July 2025 20:28:25 +0000 (0:00:00.324) 0:00:03.006 *********
2025-07-12 20:28:27.026565 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:27.026577 | orchestrator |
2025-07-12 20:28:27.026590 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-07-12 20:28:27.026602 | orchestrator | Saturday 12 July 2025 20:28:25 +0000 (0:00:00.159) 0:00:03.166 *********
2025-07-12 20:28:27.026614 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:27.026627 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:27.026639 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:27.026650 | orchestrator |
2025-07-12 20:28:27.026662 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-07-12 20:28:27.026675 | orchestrator | Saturday 12 July 2025 20:28:25 +0000 (0:00:00.313) 0:00:03.480 *********
2025-07-12 20:28:27.026687 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:27.026700 | orchestrator |
2025-07-12 20:28:27.026712 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:28:27.026737 | orchestrator | Saturday 12 July 2025 20:28:26 +0000 (0:00:00.577) 0:00:04.058 *********
2025-07-12 20:28:27.026750 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:27.026762 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:27.026775 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:27.026787 | orchestrator |
2025-07-12 20:28:27.026799 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-07-12 20:28:27.026812 | orchestrator | Saturday 12 July 2025 20:28:26 +0000 (0:00:00.482) 0:00:04.541 *********
2025-07-12 20:28:27.026826 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1de67c039d83d0fbd38ea00710ab23e54e3577329d4704e6a99e3ff5730c799e', 'image':
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-07-12 20:28:27.026843 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6cbda65199967e19a63dc79db7b8cda2f3e3df376d3938ac803810a3d6314a58', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:28:27.026856 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b3b00e03570fe57f07e8489398f672008377499d746fd7615183388d29629b0a', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 20:28:27.026871 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dc1a5eb6280dbb7d80438714cb083bd142ef4b731154631a5918f1125b8b87d9', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 20:28:27.026891 | orchestrator | skipping: [testbed-node-3] => (item={'id': '480fdc9065c65ae3cb58804c167c32b06a820b74e3bc584c1bcd5223068c80d8', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 20:28:27.026923 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1bf52a51568cf6fedde2857c88e4e0dbc7b2111465dbd5a1f62e06ffba50ba57', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 20:28:27.026993 | orchestrator | skipping: [testbed-node-3] => (item={'id': '15ce6b758fd10544fdb89bff4d8aff15dcffb371c67605e2b3476c7684c4cc07', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 20:28:27.027005 | orchestrator | 
skipping: [testbed-node-3] => (item={'id': 'cc5910fe424b8a4d193542a35cd93ebfd1804b303607ac46864f2a9df1dec132', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 20:28:27.027016 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'af06a03eb8a54671d88158594cf30a1610c93fcbcceab6d6fcd832e966728f14', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-12 20:28:27.027031 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b1110bcaa623601461dc852f7907b45c85cfbfe3e7bc9ca0050a0d38960eec82', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-12 20:28:27.027042 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ed52373046ffd4adde082159184cb02608940d23f654aa1244a87304288026b5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 20:28:27.027053 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2b2501eea16ada0b04ba979877fb697d1ed41957723b395290dc4ac24954d140', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 20:28:27.027064 | orchestrator | ok: [testbed-node-3] => (item={'id': '7b24f60e33487ab4f2c2fdd835149e6f28fa1277c702a6b8088ae77d61022679', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 20:28:27.027076 | orchestrator | ok: [testbed-node-3] => (item={'id': 'fa95aa55bcdc721d0cb220b01975cf241422456f73602ecbe5cf86b3bf758d06', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 
'Up 24 minutes'}) 2025-07-12 20:28:27.027092 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ca6c5f45647fd1addabf605c35dfa5ebeaf77e6f800ebb12a90ccdfcd15909ad', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-07-12 20:28:27.027104 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'afa9b6223bb7a2d69439324b6b481e21b6a008efb8eaaad49c9f1f98bb208478', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-12 20:28:27.027114 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a8a9ca36e6a89e9c55cff4254102f2cc2c0d2dc2a4c13ef60f3fd5ca9f98718d', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-12 20:28:27.027129 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd6aa896f737dea4adf1a8a24824479a09f3ffd0509ee85b57db7db42ada44856', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-12 20:28:27.027147 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8208e17e0fb7e7f129753997156705d4e124473fd65deffb9573fe07c578309a', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-12 20:28:27.027166 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8fff0f48e425ed48a16698eaf5ffe0d8b30b8529153e031f87f69a50c67d04c1', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-12 20:28:27.027195 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'afb56383a5cd74ac2430ce9e036565aa5bc3996b5466aed66a8984ab8e227254', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': 
'/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-07-12 20:28:27.027225 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8392d045b4f128063c102d7eb6ab32018de7d7c5e788d48b25773279f35be796', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:28:27.292604 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b2a70b372a54be6b95839a41df0eee2c8e46349f17071338e2251ef769c47490', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 20:28:27.292710 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dce3546d20b38412de6d89ceefbccc5dc0d2a5d8717c54dbe69760e22275915f', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 20:28:27.292728 | orchestrator | skipping: [testbed-node-4] => (item={'id': '48c1d6ca76f712e24182e9cfe8bfc29b1ca42f5d722727f2c55e5b19c05f54d1', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 20:28:27.292742 | orchestrator | skipping: [testbed-node-4] => (item={'id': '09a854c9fc628c49c78c2dadafa524516b87a75f806d8a16f092728a2bcaed98', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 20:28:27.292753 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6ca103f94aed7e186ba6cdffe691847b197bc69c55519fc0b2f3867cb836ac42', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 20:28:27.292764 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'5b97707d0675d7b22d2c12c00985c319b3b589dc8f1ea40f7a4492bd2a0e50e4', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 20:28:27.292775 | orchestrator | skipping: [testbed-node-4] => (item={'id': '474804cee256d30b87d19b9749dda0a1a90f9c5192ca640819c9a307470bac84', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-12 20:28:27.292787 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0477d3ec04373c9598c3b5b47ba1729d21813083771f1e0f85ab5af1e15db120', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-12 20:28:27.292798 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5e196914f4d167f7f25e7b06ab99416e27b80666ac90894ce607419b254ef59e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 20:28:27.292809 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a8d38a4e88099a9551d2785ac48d712ee5f425994900e698e96e41d92c2cc91', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 20:28:27.292822 | orchestrator | ok: [testbed-node-4] => (item={'id': '5659a48a4bd9884d4aaf7fc59e0f03b3ca460f12d8a1f0b24eafe2a63db804e4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 20:28:27.292834 | orchestrator | ok: [testbed-node-4] => (item={'id': '22db37dda581292ec0fbd9490e118f4262341bd4e51aeb3c5c5d03c8b0552ca6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 
20:28:27.292867 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a46c202d3bafd5b4417e13b7dc8d5e3a066725b7fc93c3846be5f71ea1d8f7a0', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-07-12 20:28:27.292895 | orchestrator | skipping: [testbed-node-4] => (item={'id': '98c0ea560d62bf1a9c5a8fc41d2062ba57406913248af237295da4247c94078f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-12 20:28:27.292907 | orchestrator | skipping: [testbed-node-4] => (item={'id': '18df89e1d8f25e9034d62d85ef6e193a7ec538304f0dc2278c10129ffa3eca5c', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-07-12 20:28:27.292936 | orchestrator | skipping: [testbed-node-4] => (item={'id': '90d95f5a9859cdbf12dc847350bbb7e76bf1b99484b71bd317880f551124f8ad', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-12 20:28:27.292948 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9ef60d2751c57e1c073f852c0a0daba3e186f10a4b7b9095454c7d8c8b884be2', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})  2025-07-12 20:28:27.292959 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f30e360a6ebcc75ea63e08024251c110befa73ba0b6d73126431873324567d06', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-12 20:28:27.292970 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bf83450cc4223edb79ef9e71fa869140f0b201e39cfeae78dabeb47b91538420', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 
'status': 'Up 8 minutes (healthy)'})  2025-07-12 20:28:27.292981 | orchestrator | skipping: [testbed-node-5] => (item={'id': '96b2cc8f9de3acaaa15bbbaccbe9b6a7807d9da7f773b8e24dc94d973d5fb2c6', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:28:27.292992 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2f05c1bf9c86133cc2506de10b4e9c46eca0c03045ca8e32c50ab0a8452da86d', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 20:28:27.293003 | orchestrator | skipping: [testbed-node-5] => (item={'id': '172189b5c9735498e7fa927ad1d50d2bb46c9e548618701483e000e73f3bcf14', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 20:28:27.293015 | orchestrator | skipping: [testbed-node-5] => (item={'id': '47b87a753c1a7e3a89dd6e21c0ce7b78eda8f348e6072dab9ca6527830d29c1f', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 20:28:27.293031 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b615c19ad2a18f24a660491be70555cd1154522c76f8cfbeed5116946f8d5d16', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-12 20:28:27.293042 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5954027efc059f1cb3f9d6451323e4bed6acd0331c319f304b1dee59b33abc81', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 20:28:27.293053 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'be4a3a54148e53f9c2806fc936ae97184e6badf9ae969c7f98e3b9669230101c', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 20:28:27.293075 | orchestrator | skipping: [testbed-node-5] => (item={'id': '800fa37306bbdad58fc7b231b807834be02b35b19762f45b48d7a2168a6196f9', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-12 20:28:27.293086 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5cb3d0695a1fd44ba74caecff54c27f59a377a4a9ce7ec9d073a2a08f14a354a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-12 20:28:27.293097 | orchestrator | skipping: [testbed-node-5] => (item={'id': '89146bef2a517bac1a4f9466762c36880f40e11a7d000c9e8b3518e9bd820a58', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 20:28:27.293108 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c35e012b26b33f2c30a630b8ae140c199b4ec4648d5cd1fdcc24bb1461e876ee', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 20:28:27.293126 | orchestrator | ok: [testbed-node-5] => (item={'id': '244d41be5a9ecb794927b53738ea894dcde57f16417f0bab1d14dc65d5424f36', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 20:28:35.718140 | orchestrator | ok: [testbed-node-5] => (item={'id': '8cad696c3193b63e242e4f9527e24f56d64b7c735a2e077e9ae031dbef92f57d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-12 
20:28:35.718259 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cd95841dedec3d832a3bd0419047353b38d5340230d0b0e5c7d8aad6bfbe1924', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})
2025-07-12 20:28:35.718277 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6106a09c474683ca56dc2bd556622a5e57108d97d3dcd114e8b320c51607189a', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-07-12 20:28:35.718291 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b35d8b1a40b47c5027a9f24865324665a0708e08df4e6088e345f27ae9c05bb7', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-07-12 20:28:35.718304 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bc27b227ddc29b5f2254d175698b66f9da0ae56af7fc42802164ccf94cb0fb61', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 29 minutes'})
2025-07-12 20:28:35.718315 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1ffa361e488711b092579c73befcd8d8441a0d4caccf3852b77c0461ba80f7af', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 29 minutes'})
2025-07-12 20:28:35.718326 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e8ffd4cc7a18c5a6257f9ed94f98c522e7a0bd2e0cdc9ab1e3936a240084b25b', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-07-12 20:28:35.718338 | orchestrator |
2025-07-12 20:28:35.718396 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-07-12 20:28:35.718410 | orchestrator | Saturday 12 July 2025 20:28:27 +0000 (0:00:00.540) 0:00:05.081 *********
2025-07-12 20:28:35.718421 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.718432 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:35.718443 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:35.718454 | orchestrator |
2025-07-12 20:28:35.718481 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-07-12 20:28:35.718516 | orchestrator | Saturday 12 July 2025 20:28:27 +0000 (0:00:00.297) 0:00:05.379 *********
2025-07-12 20:28:35.718528 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.718540 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:28:35.718551 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:28:35.718561 | orchestrator |
2025-07-12 20:28:35.718572 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-07-12 20:28:35.718583 | orchestrator | Saturday 12 July 2025 20:28:27 +0000 (0:00:00.310) 0:00:05.689 *********
2025-07-12 20:28:35.718594 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.718604 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:35.718615 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:35.718628 | orchestrator |
2025-07-12 20:28:35.718639 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:28:35.718652 | orchestrator | Saturday 12 July 2025 20:28:28 +0000 (0:00:00.496) 0:00:06.186 *********
2025-07-12 20:28:35.718664 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.718676 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:35.718688 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:35.718700 | orchestrator |
2025-07-12 20:28:35.718712 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-07-12 20:28:35.718724 | orchestrator | Saturday 12 July 2025 20:28:28 +0000 (0:00:00.321) 0:00:06.507 *********
2025-07-12 20:28:35.718736 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-07-12 20:28:35.718750 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-07-12 20:28:35.718762 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.718774 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-07-12 20:28:35.718787 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-07-12 20:28:35.718799 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:28:35.718812 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-07-12 20:28:35.718824 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-07-12 20:28:35.718837 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:28:35.718849 | orchestrator |
2025-07-12 20:28:35.718862 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-07-12 20:28:35.718875 | orchestrator | Saturday 12 July 2025 20:28:29 +0000 (0:00:00.326) 0:00:06.834 *********
2025-07-12 20:28:35.718887 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.718898 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:35.718909 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:35.718920 | orchestrator |
2025-07-12 20:28:35.718949 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-07-12 20:28:35.718961 | orchestrator | Saturday 12 July 2025 20:28:29 +0000 (0:00:00.367) 0:00:07.202 *********
2025-07-12 20:28:35.718971 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.718982 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:28:35.718993 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:28:35.719003 | orchestrator |
2025-07-12 20:28:35.719014 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-07-12 20:28:35.719025 | orchestrator | Saturday 12 July 2025 20:28:29 +0000 (0:00:00.490) 0:00:07.692 *********
2025-07-12 20:28:35.719035 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.719046 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:28:35.719057 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:28:35.719067 | orchestrator |
2025-07-12 20:28:35.719078 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-07-12 20:28:35.719088 | orchestrator | Saturday 12 July 2025 20:28:30 +0000 (0:00:00.309) 0:00:08.001 *********
2025-07-12 20:28:35.719099 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.719168 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:35.719180 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:35.719191 | orchestrator |
2025-07-12 20:28:35.719202 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 20:28:35.719213 | orchestrator | Saturday 12 July 2025 20:28:30 +0000 (0:00:00.328) 0:00:08.329 *********
2025-07-12 20:28:35.719223 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.719234 | orchestrator |
2025-07-12 20:28:35.719245 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 20:28:35.719255 | orchestrator | Saturday 12 July 2025 20:28:30 +0000 (0:00:00.285) 0:00:08.615 *********
2025-07-12 20:28:35.719266 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.719277 | orchestrator |
2025-07-12 20:28:35.719288 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 20:28:35.719298 | orchestrator | Saturday 12 July 2025 20:28:31 +0000 (0:00:00.261) 0:00:08.877 *********
2025-07-12 20:28:35.719309 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.719320 | orchestrator |
2025-07-12 20:28:35.719330 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:28:35.719341 | orchestrator | Saturday 12 July 2025 20:28:31 +0000 (0:00:00.236) 0:00:09.113 *********
2025-07-12 20:28:35.719398 | orchestrator |
2025-07-12 20:28:35.719410 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:28:35.719420 | orchestrator | Saturday 12 July 2025 20:28:31 +0000 (0:00:00.067) 0:00:09.180 *********
2025-07-12 20:28:35.719431 | orchestrator |
2025-07-12 20:28:35.719441 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:28:35.719452 | orchestrator | Saturday 12 July 2025 20:28:31 +0000 (0:00:00.063) 0:00:09.243 *********
2025-07-12 20:28:35.719462 | orchestrator |
2025-07-12 20:28:35.719473 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 20:28:35.719483 | orchestrator | Saturday 12 July 2025 20:28:31 +0000 (0:00:00.262) 0:00:09.506 *********
2025-07-12 20:28:35.719494 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.719505 | orchestrator |
2025-07-12 20:28:35.719515 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-07-12 20:28:35.719526 | orchestrator | Saturday 12 July 2025 20:28:31 +0000 (0:00:00.247) 0:00:09.753 *********
2025-07-12 20:28:35.719536 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.719547 | orchestrator |
2025-07-12 20:28:35.719558 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:28:35.719569 | orchestrator | Saturday 12 July 2025 20:28:32 +0000 (0:00:00.275) 0:00:10.028 *********
2025-07-12 20:28:35.719579 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.719590 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:35.719600 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:35.719611 | orchestrator |
2025-07-12 20:28:35.719622 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-07-12 20:28:35.719632 | orchestrator | Saturday 12 July 2025 20:28:32 +0000 (0:00:00.304) 0:00:10.332 *********
2025-07-12 20:28:35.719643 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.719654 | orchestrator |
2025-07-12 20:28:35.719664 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-07-12 20:28:35.719675 | orchestrator | Saturday 12 July 2025 20:28:32 +0000 (0:00:00.248) 0:00:10.581 *********
2025-07-12 20:28:35.719685 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:28:35.719696 | orchestrator |
2025-07-12 20:28:35.719707 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-07-12 20:28:35.719717 | orchestrator | Saturday 12 July 2025 20:28:34 +0000 (0:00:01.679) 0:00:12.260 *********
2025-07-12 20:28:35.719728 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.719739 | orchestrator |
2025-07-12 20:28:35.719749 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-07-12 20:28:35.719760 | orchestrator | Saturday 12 July 2025 20:28:34 +0000 (0:00:00.127) 0:00:12.388 *********
2025-07-12 20:28:35.719778 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.719788 | orchestrator |
2025-07-12 20:28:35.719799 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-07-12 20:28:35.719810 | orchestrator | Saturday 12 July 2025 20:28:34 +0000 (0:00:00.299) 0:00:12.688 *********
2025-07-12 20:28:35.719820 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:35.719831 | orchestrator |
2025-07-12 20:28:35.719841 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-07-12 20:28:35.719852 | orchestrator | Saturday 12 July 2025 20:28:34 +0000 (0:00:00.121) 0:00:12.810 *********
2025-07-12 20:28:35.719863 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.719873 | orchestrator |
2025-07-12 20:28:35.719884 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:28:35.719895 | orchestrator | Saturday 12 July 2025 20:28:35 +0000 (0:00:00.144) 0:00:12.954 *********
2025-07-12 20:28:35.719906 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:35.719916 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:35.719927 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:35.719937 | orchestrator |
2025-07-12 20:28:35.719949 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-07-12 20:28:35.719967 | orchestrator | Saturday 12 July 2025 20:28:35 +0000 (0:00:00.563) 0:00:13.518 *********
2025-07-12 20:28:49.032055 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:28:49.032214 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:28:49.032226 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:28:49.032241 | orchestrator |
2025-07-12 20:28:49.032249 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-07-12 20:28:49.032257 | orchestrator | Saturday 12 July 2025 20:28:38 +0000 (0:00:02.444) 0:00:15.963 *********
2025-07-12 20:28:49.032263 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:49.032270 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:49.032276 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:49.032282 | orchestrator |
2025-07-12 20:28:49.032289 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-07-12 20:28:49.032295 | orchestrator | Saturday 12 July 2025 20:28:38 +0000 (0:00:00.306) 0:00:16.269 *********
2025-07-12 20:28:49.032301 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:49.032307 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:49.032313 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:49.032319 | orchestrator |
2025-07-12 20:28:49.032326 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-07-12 20:28:49.032332 | orchestrator | Saturday 12 July 2025 20:28:38 +0000 (0:00:00.473) 0:00:16.742 *********
2025-07-12 20:28:49.032338 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:49.032371 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:28:49.032378 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:28:49.032384 | orchestrator |
2025-07-12 20:28:49.032430 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-07-12 20:28:49.032437 | orchestrator | Saturday 12 July 2025 20:28:39 +0000 (0:00:00.512) 0:00:17.255 *********
2025-07-12 20:28:49.032444 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:49.032450 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:49.032456 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:49.032462 | orchestrator |
2025-07-12 20:28:49.032468 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-07-12 20:28:49.032474 | orchestrator | Saturday 12 July 2025 20:28:39 +0000 (0:00:00.306) 0:00:17.562 *********
2025-07-12 20:28:49.032480 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:49.032486 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:28:49.032492 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:28:49.032498 | orchestrator |
2025-07-12 20:28:49.032504 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-07-12 20:28:49.032510 | orchestrator | Saturday 12 July 2025 20:28:40 +0000 (0:00:00.342) 0:00:17.904 *********
2025-07-12 20:28:49.032517 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:49.032541 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:28:49.032548 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:28:49.032554 | orchestrator |
2025-07-12 20:28:49.032560 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:28:49.032566 | orchestrator | Saturday 12 July 2025 20:28:40 +0000 (0:00:00.326) 0:00:18.231 *********
2025-07-12 20:28:49.032572 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:49.032578 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:49.032584 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:49.032590 | orchestrator |
2025-07-12 20:28:49.032600 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-07-12 20:28:49.032607 | orchestrator | Saturday 12 July 2025 20:28:41 +0000 (0:00:00.798) 0:00:19.030 *********
2025-07-12 20:28:49.032614 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:49.032621 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:49.032628 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:49.032635 | orchestrator |
2025-07-12 20:28:49.032642 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-07-12 20:28:49.032649 | orchestrator | Saturday 12 July 2025 20:28:41 +0000 (0:00:00.476) 0:00:19.506 *********
2025-07-12 20:28:49.032656 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:49.032662 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:49.032669 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:49.032676 | orchestrator |
2025-07-12 20:28:49.032683 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-07-12 20:28:49.032690 | orchestrator | Saturday 12 July 2025 20:28:41 +0000 (0:00:00.304) 0:00:19.811 *********
2025-07-12 20:28:49.032697 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:49.032704 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:28:49.032711 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:28:49.032718 | orchestrator |
2025-07-12 20:28:49.032726 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-07-12 20:28:49.032732 | orchestrator | Saturday 12 July 2025 20:28:42 +0000 (0:00:00.318) 0:00:20.129 *********
2025-07-12 20:28:49.032738 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:28:49.032744 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:28:49.032750 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:28:49.032756 | orchestrator |
2025-07-12 20:28:49.032762 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 20:28:49.032768 | orchestrator | Saturday 12 July 2025 20:28:42 +0000 (0:00:00.532) 0:00:20.662 *********
2025-07-12 20:28:49.032774 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:28:49.032780 | orchestrator |
2025-07-12 20:28:49.032786 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 20:28:49.032792 | orchestrator | Saturday 12 July 2025 20:28:43 +0000 (0:00:00.276) 0:00:20.938 *********
2025-07-12 20:28:49.032798 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:28:49.032804 | orchestrator |
2025-07-12 20:28:49.032810 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 20:28:49.032817 | orchestrator | Saturday 12 July 2025 20:28:43 +0000 (0:00:00.247) 0:00:21.185 *********
2025-07-12 20:28:49.032823 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:28:49.032829 | orchestrator |
2025-07-12 20:28:49.032835 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 20:28:49.032841 | orchestrator | Saturday 12 July 2025 20:28:45 +0000 (0:00:01.718) 0:00:22.904 *********
2025-07-12 20:28:49.032847 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:28:49.032853 | orchestrator |
2025-07-12 20:28:49.032859 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 20:28:49.032865 | orchestrator | Saturday 12 July 2025 20:28:45 +0000 (0:00:00.315) 0:00:23.219 *********
2025-07-12 20:28:49.032886 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:28:49.032892 | orchestrator |
2025-07-12 20:28:49.032898 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:28:49.032910 | orchestrator | Saturday 12 July 2025 20:28:45 +0000 (0:00:00.282) 0:00:23.501 *********
2025-07-12 20:28:49.032916 | orchestrator |
2025-07-12 20:28:49.032922 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:28:49.032928 | orchestrator | Saturday 12 July 2025 20:28:45 +0000 (0:00:00.074) 0:00:23.576 *********
2025-07-12 20:28:49.032934 | orchestrator |
2025-07-12 20:28:49.032940 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:28:49.032946 | orchestrator | Saturday 12 July 2025 20:28:45 +0000 (0:00:00.082) 0:00:23.658 *********
2025-07-12 20:28:49.032952 | orchestrator |
2025-07-12 20:28:49.032958 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 20:28:49.032968 | orchestrator | Saturday 12 July 2025 20:28:45 +0000 (0:00:00.071) 0:00:23.729 *********
2025-07-12 20:28:49.032979 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:28:49.032989 | orchestrator |
2025-07-12 20:28:49.033000 | orchestrator | TASK [Print report file information] *******************************************
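The encrypted/unencrypted OSD counting above ("Get unencrypted and encrypted OSDs" plus the four fail/pass count checks) works on the output of `ceph-volume lvm list --format json`. A sketch under the assumption that dmcrypt-backed OSDs carry the `ceph.encrypted` LV tag set to `"1"`; the sample data is invented:

```python
import json

# Invented sample of `ceph-volume lvm list --format json` output:
# one key per OSD id, each mapping to its logical volumes and their tags.
lvm_json = json.dumps({
    "0": [{"type": "block", "tags": {"ceph.osd_id": "0", "ceph.encrypted": "1"}}],
    "1": [{"type": "block", "tags": {"ceph.osd_id": "1", "ceph.encrypted": "1"}}],
    "2": [{"type": "block", "tags": {"ceph.osd_id": "2", "ceph.encrypted": "0"}}],
})

def count_encrypted_osds(raw: str) -> tuple[int, int]:
    """Return (encrypted, unencrypted) OSD counts from ceph-volume JSON."""
    data = json.loads(raw)
    encrypted = unencrypted = 0
    for lvs in data.values():
        # One OSD per top-level key; any LV tagged encrypted marks the OSD.
        if any(lv["tags"].get("ceph.encrypted") == "1" for lv in lvs):
            encrypted += 1
        else:
            unencrypted += 1
    return encrypted, unencrypted

print(count_encrypted_osds(lvm_json))  # → (2, 1)
```

The validator then compares one of the two counts against the total OSD count, which is why exactly one of each fail/pass pair is skipped in the log.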
2025-07-12 20:28:49.033015 | orchestrator | Saturday 12 July 2025 20:28:47 +0000 (0:00:01.648) 0:00:25.378 *********
2025-07-12 20:28:49.033028 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-07-12 20:28:49.033038 | orchestrator |  "msg": [
2025-07-12 20:28:49.033049 | orchestrator |  "Validator run completed.",
2025-07-12 20:28:49.033059 | orchestrator |  "You can find the report file here:",
2025-07-12 20:28:49.033069 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-07-12T20:28:23+00:00-report.json",
2025-07-12 20:28:49.033081 | orchestrator |  "on the following host:",
2025-07-12 20:28:49.033092 | orchestrator |  "testbed-manager"
2025-07-12 20:28:49.033103 | orchestrator |  ]
2025-07-12 20:28:49.033114 | orchestrator | }
2025-07-12 20:28:49.033126 | orchestrator |
2025-07-12 20:28:49.033137 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:28:49.033150 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-07-12 20:28:49.033161 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 20:28:49.033173 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 20:28:49.033184 | orchestrator |
2025-07-12 20:28:49.033195 | orchestrator |
2025-07-12 20:28:49.033206 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:28:49.033223 | orchestrator | Saturday 12 July 2025 20:28:48 +0000 (0:00:01.079) 0:00:26.458 *********
2025-07-12 20:28:49.033235 | orchestrator | ===============================================================================
2025-07-12 20:28:49.033246 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.44s
2025-07-12 20:28:49.033257 | orchestrator | Aggregate test results step one ----------------------------------------- 1.72s
2025-07-12 20:28:49.033268 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.68s
2025-07-12 20:28:49.033280 | orchestrator | Write report file ------------------------------------------------------- 1.65s
2025-07-12 20:28:49.033291 | orchestrator | Print report file information ------------------------------------------- 1.08s
2025-07-12 20:28:49.033300 | orchestrator | Create report output directory ------------------------------------------ 1.07s
2025-07-12 20:28:49.033310 | orchestrator | Prepare test data ------------------------------------------------------- 0.80s
2025-07-12 20:28:49.033320 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s
2025-07-12 20:28:49.033330 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.58s
2025-07-12 20:28:49.033340 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s
2025-07-12 20:28:49.033386 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.54s
2025-07-12 20:28:49.033396 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.53s
2025-07-12 20:28:49.033405 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.51s
2025-07-12 20:28:49.033414 | orchestrator | Set test result to passed if count matches ------------------------------ 0.50s
2025-07-12 20:28:49.033424 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.49s
2025-07-12 20:28:49.033435 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s
2025-07-12 20:28:49.033445 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.48s
2025-07-12 20:28:49.033456 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.47s
2025-07-12 20:28:49.033465 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s
2025-07-12 20:28:49.033474 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.37s
2025-07-12 20:28:49.368468 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-07-12 20:28:49.377553 | orchestrator | + set -e
2025-07-12 20:28:49.377616 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 20:28:49.377623 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 20:28:49.377628 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 20:28:49.377633 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 20:28:49.377638 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 20:28:49.377643 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 20:28:49.377649 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 20:28:49.377654 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-12 20:28:49.377659 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-12 20:28:49.377664 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 20:28:49.377669 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 20:28:49.377673 | orchestrator | ++ export ARA=false
2025-07-12 20:28:49.377678 | orchestrator | ++ ARA=false
2025-07-12 20:28:49.377683 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 20:28:49.377688 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 20:28:49.377693 | orchestrator | ++ export TEMPEST=false
2025-07-12 20:28:49.377698 | orchestrator | ++ TEMPEST=false
2025-07-12 20:28:49.377703 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 20:28:49.377707 | orchestrator | ++ IS_ZUUL=true
2025-07-12 20:28:49.377712 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-07-12 20:28:49.377717 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.169
2025-07-12 20:28:49.377722 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 20:28:49.377726 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 20:28:49.377731 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 20:28:49.377736 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 20:28:49.377740 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 20:28:49.377745 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 20:28:49.377750 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 20:28:49.377754 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 20:28:49.377759 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-12 20:28:49.377764 | orchestrator | + source /etc/os-release
2025-07-12 20:28:49.377768 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-07-12 20:28:49.377773 | orchestrator | ++ NAME=Ubuntu
2025-07-12 20:28:49.377777 | orchestrator | ++ VERSION_ID=24.04
2025-07-12 20:28:49.377782 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-07-12 20:28:49.377787 | orchestrator | ++ VERSION_CODENAME=noble
2025-07-12 20:28:49.377791 | orchestrator | ++ ID=ubuntu
2025-07-12 20:28:49.377796 | orchestrator | ++ ID_LIKE=debian
2025-07-12 20:28:49.379304 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-07-12 20:28:49.379333 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-07-12 20:28:49.379339 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-07-12 20:28:49.379391 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-07-12 20:28:49.379399 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-07-12 20:28:49.379404 | orchestrator | ++ LOGO=ubuntu-logo
2025-07-12 20:28:49.379409 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-07-12 20:28:49.379415 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-07-12 20:28:49.379423 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-12 20:28:49.402404 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-12 20:29:14.093504 | orchestrator |
2025-07-12 20:29:14.093617 | orchestrator | # Status of Elasticsearch
2025-07-12 20:29:14.093640 | orchestrator |
2025-07-12 20:29:14.093649 | orchestrator | + pushd /opt/configuration/contrib
2025-07-12 20:29:14.093667 | orchestrator | + echo
2025-07-12 20:29:14.093675 | orchestrator | + echo '# Status of Elasticsearch'
2025-07-12 20:29:14.093683 | orchestrator | + echo
2025-07-12 20:29:14.093691 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-07-12 20:29:14.262690 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-07-12 20:29:14.263738 | orchestrator |
2025-07-12 20:29:14.263800 | orchestrator | # Status of MariaDB
2025-07-12 20:29:14.263808 | orchestrator |
2025-07-12 20:29:14.263813 | orchestrator | + echo
2025-07-12 20:29:14.263819 | orchestrator | + echo '# Status of MariaDB'
2025-07-12 20:29:14.263824 | orchestrator | + echo
2025-07-12 20:29:14.263828 | orchestrator | + MARIADB_USER=root_shard_0
2025-07-12 20:29:14.263850 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-07-12 20:29:14.335999 | orchestrator | Reading package lists...
2025-07-12 20:29:14.696945 | orchestrator | Building dependency tree...
2025-07-12 20:29:14.697063 | orchestrator | Reading state information...
2025-07-12 20:29:15.133171 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
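The `check_elasticsearch` run above reports cluster health in Nagios plugin format. Under the hood such checks read the `/_cluster/health` API and map the cluster colour to a plugin exit code; a minimal sketch (the health document mirrors the values in the log, and the green/yellow/red mapping is the standard Nagios OK/WARNING/CRITICAL convention, not this plugin's exact code):

```python
import json

# Sample /_cluster/health response; values mirror the log output above.
health = json.loads("""{
  "cluster_name": "kolla_logging", "status": "green", "timed_out": false,
  "number_of_nodes": 3, "number_of_data_nodes": 3,
  "active_primary_shards": 9, "active_shards": 22,
  "relocating_shards": 0, "initializing_shards": 0, "unassigned_shards": 0
}""")

def nagios_state(status: str) -> tuple[int, str]:
    """Map an Elasticsearch health colour to a Nagios exit code and label."""
    return {"green": (0, "OK"), "yellow": (1, "WARNING")}.get(status, (2, "CRITICAL"))

code, label = nagios_state(health["status"])
print(f"{label} - elasticsearch ({health['cluster_name']}) status: {health['status']}")
```

With the log's values this prints an OK line, consistent with the `status: green` result recorded above.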
2025-07-12 20:29:15.133269 | orchestrator | bc set to manually installed.
2025-07-12 20:29:15.133281 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2025-07-12 20:29:15.799049 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-07-12 20:29:15.799965 | orchestrator |
2025-07-12 20:29:15.800001 | orchestrator | # Status of Prometheus
2025-07-12 20:29:15.800014 | orchestrator |
2025-07-12 20:29:15.800027 | orchestrator | + echo
2025-07-12 20:29:15.800040 | orchestrator | + echo '# Status of Prometheus'
2025-07-12 20:29:15.800052 | orchestrator | + echo
2025-07-12 20:29:15.800063 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-07-12 20:29:15.862752 | orchestrator | Unauthorized
2025-07-12 20:29:15.866785 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-07-12 20:29:15.935790 | orchestrator | Unauthorized
2025-07-12 20:29:15.944920 | orchestrator |
2025-07-12 20:29:15.945047 | orchestrator | # Status of RabbitMQ
2025-07-12 20:29:15.945062 | orchestrator |
2025-07-12 20:29:15.945073 | orchestrator | + echo
2025-07-12 20:29:15.945084 | orchestrator | + echo '# Status of RabbitMQ'
2025-07-12 20:29:15.945095 | orchestrator | + echo
2025-07-12 20:29:15.945106 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-07-12 20:29:16.463379 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-07-12 20:29:16.472818 | orchestrator |
2025-07-12 20:29:16.472904 | orchestrator | # Status of Redis
2025-07-12 20:29:16.472916 | orchestrator |
2025-07-12 20:29:16.472925 | orchestrator | + echo
2025-07-12 20:29:16.472934 | orchestrator | + echo '# Status of Redis'
2025-07-12 20:29:16.472944 | orchestrator | + echo
2025-07-12 20:29:16.472954 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-07-12 20:29:16.478982 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001870s;;;0.000000;10.000000
2025-07-12 20:29:16.479228 | orchestrator |
2025-07-12 20:29:16.479251 | orchestrator | # Create backup of MariaDB database
2025-07-12 20:29:16.479266 | orchestrator |
2025-07-12 20:29:16.479282 | orchestrator | + popd
2025-07-12 20:29:16.479296 | orchestrator | + echo
2025-07-12 20:29:16.479311 | orchestrator | + echo '# Create backup of MariaDB database'
2025-07-12 20:29:16.479324 | orchestrator | + echo
2025-07-12 20:29:16.479339 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-07-12 20:29:18.407990 | orchestrator | 2025-07-12 20:29:18 | INFO  | Task e1c15fb9-ae96-483f-b602-a2e74a203541 (mariadb_backup) was prepared for execution.
2025-07-12 20:29:18.408155 | orchestrator | 2025-07-12 20:29:18 | INFO  | It takes a moment until task e1c15fb9-ae96-483f-b602-a2e74a203541 (mariadb_backup) has been started and output is visible here.
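The Galera check that printed "OK: number of NODES = 3 (wsrep_cluster_size)" above essentially reads the `wsrep_%` status variables (via `SHOW GLOBAL STATUS`) and compares the cluster size with the expected node count. A simplified sketch, not `check_galera_cluster`'s exact threshold rules; the sample values mirror the log:

```python
# Sample wsrep status variables as a MariaDB client would return them.
wsrep_status = {"wsrep_cluster_size": "3", "wsrep_cluster_status": "Primary"}

def galera_ok(status: dict[str, str], expected_nodes: int) -> bool:
    """Healthy iff the node is in the Primary component and no node is missing."""
    return (status.get("wsrep_cluster_status") == "Primary"
            and int(status.get("wsrep_cluster_size", "0")) >= expected_nodes)

print("OK" if galera_ok(wsrep_status, 3) else "CRITICAL")  # → OK
```

A partitioned or degraded cluster would report a smaller `wsrep_cluster_size` or a non-Primary status and trip the check.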
2025-07-12 20:32:26.934874 | orchestrator |
2025-07-12 20:32:26.934989 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:32:26.935007 | orchestrator |
2025-07-12 20:32:26.935019 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:32:26.935031 | orchestrator | Saturday 12 July 2025 20:29:22 +0000 (0:00:00.194) 0:00:00.194 *********
2025-07-12 20:32:26.935042 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:32:26.935054 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:32:26.935065 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:32:26.935076 | orchestrator |
2025-07-12 20:32:26.935088 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:32:26.935099 | orchestrator | Saturday 12 July 2025 20:29:22 +0000 (0:00:00.326) 0:00:00.521 *********
2025-07-12 20:32:26.935110 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-07-12 20:32:26.935121 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-07-12 20:32:26.935132 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-07-12 20:32:26.935143 | orchestrator |
2025-07-12 20:32:26.935154 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-07-12 20:32:26.935165 | orchestrator |
2025-07-12 20:32:26.935176 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-07-12 20:32:26.935187 | orchestrator | Saturday 12 July 2025 20:29:23 +0000 (0:00:00.613) 0:00:01.134 *********
2025-07-12 20:32:26.935198 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:32:26.935209 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:32:26.935220 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:32:26.935232 | orchestrator |
2025-07-12 20:32:26.935243 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-12 20:32:26.935254 | orchestrator | Saturday 12 July 2025 20:29:24 +0000 (0:00:00.420) 0:00:01.555 *********
2025-07-12 20:32:26.935265 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:32:26.935277 | orchestrator |
2025-07-12 20:32:26.935288 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-07-12 20:32:26.935298 | orchestrator | Saturday 12 July 2025 20:29:24 +0000 (0:00:00.550) 0:00:02.105 *********
2025-07-12 20:32:26.935309 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:32:26.935320 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:32:26.935331 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:32:26.935342 | orchestrator |
2025-07-12 20:32:26.935353 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-07-12 20:32:26.935364 | orchestrator | Saturday 12 July 2025 20:29:27 +0000 (0:00:03.268) 0:00:05.374 *********
2025-07-12 20:32:26.935375 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:26.935386 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:26.935398 | orchestrator |
2025-07-12 20:32:26.935411 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] ***
2025-07-12 20:32:26.935423 | orchestrator |
2025-07-12 20:32:26.935435 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] ***
2025-07-12 20:32:26.935447 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-07-12 20:32:26.935467 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-07-12 20:32:26.935488 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-07-12 20:32:26.935509 | orchestrator | mariadb_bootstrap_restart
2025-07-12 20:32:26.935529 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:32:26.935550 | orchestrator |
2025-07-12 20:32:26.935572 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-07-12 20:32:26.935658 | orchestrator | skipping: no hosts matched
2025-07-12 20:32:26.935679 | orchestrator |
2025-07-12 20:32:26.935698 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-07-12 20:32:26.935710 | orchestrator | skipping: no hosts matched
2025-07-12 20:32:26.935720 | orchestrator |
2025-07-12 20:32:26.935732 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-07-12 20:32:26.935743 | orchestrator | skipping: no hosts matched
2025-07-12 20:32:26.935753 | orchestrator |
2025-07-12 20:32:26.935764 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-07-12 20:32:26.935775 | orchestrator |
2025-07-12 20:32:26.935786 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-07-12 20:32:26.935796 | orchestrator | Saturday 12 July 2025 20:32:25 +0000 (0:02:57.937) 0:03:03.312 *********
2025-07-12 20:32:26.935807 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:26.935818 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:26.935829 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:26.935839 | orchestrator |
2025-07-12 20:32:26.935850 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-07-12 20:32:26.935861 | orchestrator | Saturday 12 July 2025 20:32:26 +0000 (0:00:00.314) 0:03:03.627 *********
2025-07-12 20:32:26.935871 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:26.935993 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:26.936014 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:26.936032 | orchestrator |
2025-07-12 20:32:26.936048 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:32:26.936065 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:32:26.936082 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 20:32:26.936099 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 20:32:26.936118 | orchestrator |
2025-07-12 20:32:26.936136 | orchestrator |
2025-07-12 20:32:26.936155 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:32:26.936174 | orchestrator | Saturday 12 July 2025 20:32:26 +0000 (0:00:00.470) 0:03:04.097 *********
2025-07-12 20:32:26.936217 | orchestrator | ===============================================================================
2025-07-12 20:32:26.936229 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 177.94s
2025-07-12 20:32:26.936240 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.27s
2025-07-12 20:32:26.936251 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2025-07-12 20:32:26.936280 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.55s
2025-07-12 20:32:26.936292 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.47s
2025-07-12 20:32:26.936302 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.42s
2025-07-12 20:32:26.936313 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2025-07-12 20:32:26.936324 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s
2025-07-12 20:32:27.297830 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-07-12 20:32:27.310405 | orchestrator | + set -e
2025-07-12 20:32:27.310489 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 20:32:27.310499 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 20:32:27.310508 | orchestrator | ++ INTERACTIVE=false
2025-07-12 20:32:27.310515 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 20:32:27.310522 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 20:32:27.310538 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-07-12 20:32:27.311368 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-07-12 20:32:27.314838 | orchestrator |
2025-07-12 20:32:27.314898 | orchestrator | # OpenStack endpoints
2025-07-12 20:32:27.314912 | orchestrator |
2025-07-12 20:32:27.314925 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-12 20:32:27.314937 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-12 20:32:27.314948 | orchestrator | + export OS_CLOUD=admin
2025-07-12 20:32:27.314960 | orchestrator | + OS_CLOUD=admin
2025-07-12 20:32:27.314972 | orchestrator | + echo
2025-07-12 20:32:27.314984 | orchestrator | + echo '# OpenStack endpoints'
2025-07-12 20:32:27.314996 | orchestrator | + echo
2025-07-12 20:32:27.315009 | orchestrator | + openstack endpoint list
2025-07-12 20:32:31.234171 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-07-12 20:32:31.234281 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-07-12 20:32:31.234295 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-07-12 20:32:31.234307 | orchestrator | | 149796ef2512492596bde11bbe4e0fc8 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-07-12 20:32:31.234318 | orchestrator | | 19df865c0b4745298057c6b8258c4ef7 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-07-12 20:32:31.234329 | orchestrator | | 1f23bb4b497e448f9f871b1fcdf05b40 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-07-12 20:32:31.234356 | orchestrator | | 24556c83cc71428ab53e3e60d03fd1bc | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-07-12 20:32:31.234368 | orchestrator | | 2c127d64efec4a6b9c6e8a703d4e7d08 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-07-12 20:32:31.234379 | orchestrator | | 482d779974c94a2b9c6e1cd54da9c3a0 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-07-12 20:32:31.234390 | orchestrator | | 59d07b460cae49f5b25cf6046042af4f | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-07-12 20:32:31.234401 | orchestrator | | 6261f1d49f6548a49cf40a80a611ad2a | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-07-12 20:32:31.234412 | orchestrator | | 7813157716254a98942171e5522baa33 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-07-12 20:32:31.234422 | orchestrator | | 814da8d2db314cebb238b0ddbb0a1ec9 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-07-12 20:32:31.234433 | orchestrator
| | 824fc27ee45b4997b6d9c4fa39282342 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-07-12 20:32:31.234444 | orchestrator | | 842b02ddb67947569ad736e50138f4eb | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-07-12 20:32:31.234455 | orchestrator | | 8960c887d5ad408ea984486d5f5a2368 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-07-12 20:32:31.234478 | orchestrator | | 93b740df4a3e42eba8a34e06d30040d0 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-07-12 20:32:31.234489 | orchestrator | | 94ef9c98d4f24854bd08380e3519b7c0 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-07-12 20:32:31.234523 | orchestrator | | ae5d235b690042de835778fb1173f0f8 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-07-12 20:32:31.234535 | orchestrator | | bca715488b40411d945e10274920da65 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-07-12 20:32:31.234546 | orchestrator | | d72f2dfd662a496eab29cab0904a2c17 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-07-12 20:32:31.234556 | orchestrator | | dddd7905eeee436b96af505982983a62 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-07-12 20:32:31.234567 | orchestrator | | ee9256e44e6e4b4b9f7b6411feded5ce | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-12 20:32:31.234595 | orchestrator | | fdf325ccc3a244e3a2df6d48fcaca2c8 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-07-12 20:32:31.234606 | orchestrator | | ffa5e2155eb6468988b4c8cb903fa9ca | RegionOne | magnum | container-infra | True | internal | 
https://api-int.testbed.osism.xyz:9511/v1 | 2025-07-12 20:32:31.234617 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 20:32:31.510204 | orchestrator | 2025-07-12 20:32:31.510306 | orchestrator | # Cinder 2025-07-12 20:32:31.510320 | orchestrator | 2025-07-12 20:32:31.510332 | orchestrator | + echo 2025-07-12 20:32:31.510345 | orchestrator | + echo '# Cinder' 2025-07-12 20:32:31.510363 | orchestrator | + echo 2025-07-12 20:32:31.510382 | orchestrator | + openstack volume service list 2025-07-12 20:32:34.244215 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 20:32:34.244303 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-07-12 20:32:34.244314 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 20:32:34.244322 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T20:32:30.000000 | 2025-07-12 20:32:34.244329 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-12T20:32:33.000000 | 2025-07-12 20:32:34.244355 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T20:32:24.000000 | 2025-07-12 20:32:34.244363 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-07-12T20:32:28.000000 | 2025-07-12 20:32:34.244371 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-07-12T20:32:32.000000 | 2025-07-12 20:32:34.244379 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-07-12T20:32:34.000000 | 2025-07-12 20:32:34.244386 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 
2025-07-12T20:32:25.000000 | 2025-07-12 20:32:34.244394 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-07-12T20:32:25.000000 | 2025-07-12 20:32:34.244400 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-07-12T20:32:26.000000 | 2025-07-12 20:32:34.244405 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 20:32:34.531497 | orchestrator | 2025-07-12 20:32:34.531581 | orchestrator | # Neutron 2025-07-12 20:32:34.531594 | orchestrator | 2025-07-12 20:32:34.531604 | orchestrator | + echo 2025-07-12 20:32:34.531616 | orchestrator | + echo '# Neutron' 2025-07-12 20:32:34.531663 | orchestrator | + echo 2025-07-12 20:32:34.531682 | orchestrator | + openstack network agent list 2025-07-12 20:32:37.365826 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 20:32:37.365991 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-07-12 20:32:37.366008 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 20:32:37.366077 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-07-12 20:32:37.366089 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-07-12 20:32:37.366100 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-07-12 20:32:37.366111 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-07-12 20:32:37.366122 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | 
:-) | UP | ovn-controller | 2025-07-12 20:32:37.366133 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-07-12 20:32:37.366143 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 20:32:37.366154 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 20:32:37.366165 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 20:32:37.366176 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 20:32:37.658448 | orchestrator | + openstack network service provider list 2025-07-12 20:32:40.233805 | orchestrator | +---------------+------+---------+ 2025-07-12 20:32:40.233919 | orchestrator | | Service Type | Name | Default | 2025-07-12 20:32:40.233935 | orchestrator | +---------------+------+---------+ 2025-07-12 20:32:40.233947 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-07-12 20:32:40.233958 | orchestrator | +---------------+------+---------+ 2025-07-12 20:32:40.538234 | orchestrator | 2025-07-12 20:32:40.538355 | orchestrator | # Nova 2025-07-12 20:32:40.538372 | orchestrator | 2025-07-12 20:32:40.538384 | orchestrator | + echo 2025-07-12 20:32:40.538395 | orchestrator | + echo '# Nova' 2025-07-12 20:32:40.538407 | orchestrator | + echo 2025-07-12 20:32:40.538418 | orchestrator | + openstack compute service list 2025-07-12 20:32:43.921590 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 20:32:43.921826 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At 
| 2025-07-12 20:32:43.921857 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 20:32:43.921878 | orchestrator | | 7c17e0b1-a309-47f1-acc0-1097536450fa | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T20:32:35.000000 | 2025-07-12 20:32:43.921900 | orchestrator | | f4e850c3-d032-4055-8272-eac6b5f8317c | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T20:32:38.000000 | 2025-07-12 20:32:43.921921 | orchestrator | | a2b45cf2-21bb-4e6a-be14-07270795afad | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-12T20:32:39.000000 | 2025-07-12 20:32:43.921943 | orchestrator | | a45f2966-7684-463f-b308-5b055a9ceac5 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-07-12T20:32:42.000000 | 2025-07-12 20:32:43.922085 | orchestrator | | 3a76ffc7-1d13-4c3b-8102-7dd5eb6bbba3 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-07-12T20:32:41.000000 | 2025-07-12 20:32:43.922112 | orchestrator | | 2e3a3158-5e0d-42ff-8eda-3296e0004e92 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-07-12T20:32:41.000000 | 2025-07-12 20:32:43.922132 | orchestrator | | 06c507e9-709f-41ad-82e0-95b48b581823 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-07-12T20:32:42.000000 | 2025-07-12 20:32:43.922149 | orchestrator | | 62c8ec22-ca28-4b2c-a09d-4c16cd09b912 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-07-12T20:32:42.000000 | 2025-07-12 20:32:43.922168 | orchestrator | | e15d3ebe-ccc2-4e78-9193-0ee8fd2ab2b2 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-07-12T20:32:43.000000 | 2025-07-12 20:32:43.922187 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 20:32:44.251045 | orchestrator | + openstack hypervisor list 2025-07-12 
20:32:49.102223 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 20:32:49.102328 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-07-12 20:32:49.102340 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 20:32:49.102349 | orchestrator | | 9ce205f1-ed02-40a2-b520-3f1bc14b00a3 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-07-12 20:32:49.102357 | orchestrator | | 075d1819-cd75-41f4-9f56-e4a31c7f17c4 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-07-12 20:32:49.102366 | orchestrator | | 135e6c13-01d6-4a53-9d70-3018c2337a91 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-07-12 20:32:49.102374 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 20:32:49.391075 | orchestrator | 2025-07-12 20:32:49.391175 | orchestrator | # Run OpenStack test play 2025-07-12 20:32:49.391191 | orchestrator | 2025-07-12 20:32:49.391203 | orchestrator | + echo 2025-07-12 20:32:49.391215 | orchestrator | + echo '# Run OpenStack test play' 2025-07-12 20:32:49.391227 | orchestrator | + echo 2025-07-12 20:32:49.391238 | orchestrator | + osism apply --environment openstack test 2025-07-12 20:32:51.273025 | orchestrator | 2025-07-12 20:32:51 | INFO  | Trying to run play test in environment openstack 2025-07-12 20:32:51.337927 | orchestrator | 2025-07-12 20:32:51 | INFO  | Task b385a13f-7086-4418-824b-be4fa78d2161 (test) was prepared for execution. 2025-07-12 20:32:51.338013 | orchestrator | 2025-07-12 20:32:51 | INFO  | It takes a moment until task b385a13f-7086-4418-824b-be4fa78d2161 (test) has been started and output is visible here. 
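The `manager-version.sh` step traced in the check script above boils down to a single `awk` call over `configuration.yml`. A minimal, self-contained sketch of that extraction (the temp file here is a stand-in for `/opt/configuration/environments/manager/configuration.yml`):

```shell
# Recreate a minimal configuration.yml and extract manager_version from it,
# mirroring the awk invocation shown in the trace above.
set -e
cfg=$(mktemp)
printf 'manager_version: latest\n' > "$cfg"
# -F': ' splits on "colon space", so $2 is the value after the key.
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' "$cfg")
export MANAGER_VERSION
echo "$MANAGER_VERSION"
rm -f "$cfg"
```

The same pattern works for any flat `key: value` line in the configuration, though a YAML-aware tool is safer once values contain nesting or quoting.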

PLAY [Create test project] *****************************************************

TASK [Create test domain] ******************************************************
Saturday 12 July 2025 20:32:55 +0000 (0:00:00.091)       0:00:00.091 *********
changed: [localhost]

TASK [Create test-admin user] **************************************************
Saturday 12 July 2025 20:32:59 +0000 (0:00:03.940)       0:00:04.031 *********
changed: [localhost]

TASK [Add manager role to user test-admin] *************************************
Saturday 12 July 2025 20:33:03 +0000 (0:00:04.373)       0:00:08.405 *********
changed: [localhost]

TASK [Create test project] *****************************************************
Saturday 12 July 2025 20:33:10 +0000 (0:00:06.884)       0:00:15.290 *********
changed: [localhost]

TASK [Create test user] ********************************************************
Saturday 12 July 2025 20:33:14 +0000 (0:00:04.188)       0:00:19.478 *********
changed: [localhost]

TASK [Add member roles to user test] *******************************************
Saturday 12 July 2025 20:33:19 +0000 (0:00:04.252)       0:00:23.731 *********
changed: [localhost] => (item=load-balancer_member)
changed: [localhost] => (item=member)
changed: [localhost] => (item=creator)

TASK [Create test server group] ************************************************
Saturday 12 July 2025 20:33:32 +0000 (0:00:12.965)       0:00:36.697 *********
changed: [localhost]

TASK [Create ssh security group] ***********************************************
Saturday 12 July 2025 20:33:36 +0000 (0:00:04.829)       0:00:41.527 *********
changed: [localhost]

TASK [Add rule to ssh security group] ******************************************
Saturday 12 July 2025 20:33:42 +0000 (0:00:05.278)       0:00:46.806 *********
changed: [localhost]

TASK [Create icmp security group] **********************************************
Saturday 12 July 2025 20:33:46 +0000 (0:00:04.246)       0:00:51.052 *********
changed: [localhost]

TASK [Add rule to icmp security group] *****************************************
Saturday 12 July 2025 20:33:50 +0000 (0:00:03.808)       0:00:54.860 *********
changed: [localhost]

TASK [Create test keypair] *****************************************************
Saturday 12 July 2025 20:33:54 +0000 (0:00:03.842)       0:00:58.703 *********
changed: [localhost]

TASK [Create test network topology] ********************************************
Saturday 12 July 2025 20:33:58 +0000 (0:00:04.011)       0:01:02.714 *********
changed: [localhost]

TASK [Create test instances] ***************************************************
Saturday 12 July 2025 20:34:12 +0000 (0:00:14.755)       0:01:17.470 *********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-3)
STILL ALIVE [task 'Create test instances' is running] **************************
changed: [localhost] => (item=test-4)

TASK [Add metadata to instances] ***********************************************
Saturday 12 July 2025 20:37:24 +0000 (0:03:11.757)       0:04:29.228 *********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Add tag to instances] ****************************************************
Saturday 12 July 2025 20:37:48 +0000 (0:00:24.028)       0:04:53.256 *********
changed: [localhost] => (item=test)
changed: [localhost] => (item=test-1)
changed: [localhost] => (item=test-2)
changed: [localhost] => (item=test-3)
changed: [localhost] => (item=test-4)

TASK [Create test volume] ******************************************************
Saturday 12 July 2025 20:38:22 +0000 (0:00:33.864)       0:05:27.121 *********
changed: [localhost]

TASK [Attach test volume] ******************************************************
Saturday 12 July 2025 20:38:30 +0000 (0:00:07.866)       0:05:34.987 *********
changed: [localhost]

TASK [Create floating ip address] **********************************************
Saturday 12 July 2025 20:38:44 +0000 (0:00:13.618)       0:05:48.606 *********
ok: [localhost]

TASK [Print floating ip address] ***********************************************
Saturday 12 July 2025 20:38:49 +0000 (0:00:05.459)       0:05:54.066 *********
ok: [localhost] => {
    "msg": "192.168.112.136"
}

PLAY RECAP *********************************************************************
localhost : ok=20  changed=18  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Saturday 12 July 2025 20:38:49 +0000 (0:00:00.053)       0:05:54.119 *********
===============================================================================
Create test instances ------------------------------------------------- 191.76s
Add tag to instances --------------------------------------------------- 33.86s
Add metadata to instances ---------------------------------------------- 24.03s
Create test network topology ------------------------------------------- 14.76s
Attach test volume ----------------------------------------------------- 13.62s
Add member roles to user test ------------------------------------------ 12.97s
Create test volume ------------------------------------------------------ 7.87s
Add manager role to user test-admin ------------------------------------- 6.88s
Create floating ip address ---------------------------------------------- 5.46s
Create ssh security group ----------------------------------------------- 5.28s
Create test server group ------------------------------------------------ 4.83s
Create test-admin user -------------------------------------------------- 4.37s
Create test user -------------------------------------------------------- 4.25s
Add rule to ssh security group ------------------------------------------ 4.25s
Create test project ----------------------------------------------------- 4.19s
Create test keypair ----------------------------------------------------- 4.01s
Create test domain ------------------------------------------------------ 3.94s
Add rule to icmp security group ----------------------------------------- 3.84s
Create icmp security group ---------------------------------------------- 3.81s
Print floating ip address ----------------------------------------------- 0.05s

+ server_list
+ openstack --os-cloud test server list
| ID | Name | Status | Networks | Image | Flavor |
|----+------+--------+----------+-------+--------|
| a9ea7513-5453-405c-bf3a-7dff6ae3374d | test-4 | ACTIVE | auto_allocated_network=10.42.0.28, 192.168.112.168 | Cirros 0.6.2 | SCS-1L-1-5 |
| 99650b31-34ae-4f82-ba45-6e0fd343df23 | test-3 | ACTIVE | auto_allocated_network=10.42.0.12, 192.168.112.192 | Cirros 0.6.2 | SCS-1L-1-5 |
| f7b5715f-402c-4fa7-880c-a9f6bd896a69 | test-2 | ACTIVE | auto_allocated_network=10.42.0.15, 192.168.112.200 | Cirros 0.6.2 | SCS-1L-1-5 |
| 35cef813-8199-4ddf-9cad-a138373fb1c9 | test-1 | ACTIVE | auto_allocated_network=10.42.0.37, 192.168.112.110 | Cirros 0.6.2 | SCS-1L-1-5 |
| bfc3b151-a640-421f-8d7c-3bfe01af42ef | test | ACTIVE | auto_allocated_network=10.42.0.52, 192.168.112.136 | Cirros 0.6.2 | SCS-1L-1-5 |

+ openstack --os-cloud test server show test
| Field | Value |
|-------+-------|
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-07-12T20:34:42.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.52, 192.168.112.136 |
| config_drive | |
| created | 2025-07-12T20:34:20Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | a8ffa0e3e47f1b192cf98dd4324522c4a05a7af8a887a9b7bca6f11e |
| host_status | None |
| id | bfc3b151-a640-421f-8d7c-3bfe01af42ef |
| image | Cirros 0.6.2 (6cd5b23e-1ab7-45ef-957e-105c4ac6c74f) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 1ab120b1563f4a80ae458262d2195e37 |
| properties | hostname='test' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-07-12T20:37:29Z |
| user_id | 96b4bee917f14ae1861c60180b8008bf |
| volumes_attached | delete_on_termination='False', id='99481ace-3a98-4387-886a-5a9ec04bd603' |

+ openstack --os-cloud test server show test-1
| Field | Value |
|-------+-------|
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-07-12T20:35:25.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.37, 192.168.112.110 |
| config_drive | |
| created | 2025-07-12T20:35:04Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | fd136cf2848b95e69c76d842d5693a776e08fb9ce6175c23645bcb78 |
| host_status | None |
| id | 35cef813-8199-4ddf-9cad-a138373fb1c9 |
| image | Cirros 0.6.2 (6cd5b23e-1ab7-45ef-957e-105c4ac6c74f) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-1 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 1ab120b1563f4a80ae458262d2195e37 |
| properties | hostname='test-1' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated |
2025-07-12T20:37:34Z | 2025-07-12 20:39:01.497269 | orchestrator | | user_id | 96b4bee917f14ae1861c60180b8008bf | 2025-07-12 20:39:01.497281 | orchestrator | | volumes_attached | | 2025-07-12 20:39:01.500454 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 20:39:01.800276 | orchestrator | + openstack --os-cloud test server show test-2 2025-07-12 20:39:05.038706 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 20:39:05.038814 | orchestrator | | Field | Value | 2025-07-12 20:39:05.038851 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 20:39:05.038863 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-12 20:39:05.038873 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-12 20:39:05.038883 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-12 20:39:05.038892 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-07-12 20:39:05.038902 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-12 20:39:05.038912 | orchestrator | | 
OS-EXT-SRV-ATTR:instance_name | None | 2025-07-12 20:39:05.038922 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-12 20:39:05.038944 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-12 20:39:05.038973 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-12 20:39:05.038984 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-12 20:39:05.039041 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-12 20:39:05.039053 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-12 20:39:05.039096 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-12 20:39:05.039114 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-12 20:39:05.039132 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-12 20:39:05.039148 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T20:36:05.000000 | 2025-07-12 20:39:05.039160 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-12 20:39:05.039183 | orchestrator | | accessIPv4 | | 2025-07-12 20:39:05.039194 | orchestrator | | accessIPv6 | | 2025-07-12 20:39:05.039205 | orchestrator | | addresses | auto_allocated_network=10.42.0.15, 192.168.112.200 | 2025-07-12 20:39:05.039241 | orchestrator | | config_drive | | 2025-07-12 20:39:05.039255 | orchestrator | | created | 2025-07-12T20:35:43Z | 2025-07-12 20:39:05.039268 | orchestrator | | description | None | 2025-07-12 20:39:05.039282 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-12 20:39:05.039295 | orchestrator | | hostId | 5816cfd4fc55d26f64ac3c36c81d0d73a4ff97fcfab70b16e3ee58ee | 2025-07-12 20:39:05.039307 | orchestrator | | host_status | None | 2025-07-12 20:39:05.039321 | orchestrator | | id | 
f7b5715f-402c-4fa7-880c-a9f6bd896a69 | 2025-07-12 20:39:05.039344 | orchestrator | | image | Cirros 0.6.2 (6cd5b23e-1ab7-45ef-957e-105c4ac6c74f) | 2025-07-12 20:39:05.039358 | orchestrator | | key_name | test | 2025-07-12 20:39:05.039370 | orchestrator | | locked | False | 2025-07-12 20:39:05.039383 | orchestrator | | locked_reason | None | 2025-07-12 20:39:05.039402 | orchestrator | | name | test-2 | 2025-07-12 20:39:05.039426 | orchestrator | | pinned_availability_zone | None | 2025-07-12 20:39:05.039440 | orchestrator | | progress | 0 | 2025-07-12 20:39:05.039453 | orchestrator | | project_id | 1ab120b1563f4a80ae458262d2195e37 | 2025-07-12 20:39:05.039466 | orchestrator | | properties | hostname='test-2' | 2025-07-12 20:39:05.039479 | orchestrator | | security_groups | name='ssh' | 2025-07-12 20:39:05.039491 | orchestrator | | | name='icmp' | 2025-07-12 20:39:05.039504 | orchestrator | | server_groups | None | 2025-07-12 20:39:05.039516 | orchestrator | | status | ACTIVE | 2025-07-12 20:39:05.039529 | orchestrator | | tags | test | 2025-07-12 20:39:05.039553 | orchestrator | | trusted_image_certificates | None | 2025-07-12 20:39:05.039572 | orchestrator | | updated | 2025-07-12T20:37:38Z | 2025-07-12 20:39:05.039593 | orchestrator | | user_id | 96b4bee917f14ae1861c60180b8008bf | 2025-07-12 20:39:05.039605 | orchestrator | | volumes_attached | | 2025-07-12 20:39:05.041632 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 20:39:05.238818 | orchestrator | + openstack --os-cloud test server show test-3 2025-07-12 20:39:08.419039 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 20:39:08.419150 | orchestrator | | Field | Value | 2025-07-12 20:39:08.419163 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 20:39:08.419171 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-12 20:39:08.419179 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-12 20:39:08.419186 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-12 20:39:08.419193 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-07-12 20:39:08.419223 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-12 20:39:08.419231 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-12 20:39:08.419249 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-12 20:39:08.419281 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-12 20:39:08.419303 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-12 20:39:08.419312 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-12 20:39:08.419319 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-12 20:39:08.419326 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-12 20:39:08.419334 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-12 20:39:08.419341 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-12 20:39:08.419382 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-12 20:39:08.419390 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T20:36:41.000000 | 2025-07-12 20:39:08.419401 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-12 20:39:08.419414 | orchestrator | | accessIPv4 | | 2025-07-12 20:39:08.419426 | orchestrator | | accessIPv6 | | 2025-07-12 20:39:08.419439 | orchestrator | | addresses | auto_allocated_network=10.42.0.12, 192.168.112.192 | 2025-07-12 20:39:08.419458 | orchestrator | | config_drive | | 2025-07-12 20:39:08.419470 | orchestrator | | created | 2025-07-12T20:36:26Z | 2025-07-12 20:39:08.419482 | orchestrator | | description | None | 2025-07-12 20:39:08.419525 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-12 20:39:08.419543 | orchestrator | | hostId | a8ffa0e3e47f1b192cf98dd4324522c4a05a7af8a887a9b7bca6f11e | 2025-07-12 20:39:08.419567 | orchestrator | | host_status | None | 2025-07-12 20:39:08.419582 | orchestrator | | id | 99650b31-34ae-4f82-ba45-6e0fd343df23 | 2025-07-12 20:39:08.419594 | orchestrator | | image | Cirros 0.6.2 (6cd5b23e-1ab7-45ef-957e-105c4ac6c74f) | 2025-07-12 20:39:08.419606 | orchestrator | | key_name | test | 2025-07-12 20:39:08.419633 | orchestrator | | locked | False | 2025-07-12 20:39:08.419642 | orchestrator | | locked_reason | None | 2025-07-12 20:39:08.419651 | orchestrator | | name | test-3 | 2025-07-12 20:39:08.419667 | orchestrator | | pinned_availability_zone | None | 2025-07-12 20:39:08.419677 | orchestrator | | progress | 0 | 2025-07-12 20:39:08.419685 | orchestrator | | project_id | 1ab120b1563f4a80ae458262d2195e37 | 2025-07-12 20:39:08.419693 | orchestrator | | properties | hostname='test-3' | 2025-07-12 
20:39:08.419708 | orchestrator | | security_groups | name='ssh' | 2025-07-12 20:39:08.419717 | orchestrator | | | name='icmp' | 2025-07-12 20:39:08.419725 | orchestrator | | server_groups | None | 2025-07-12 20:39:08.419734 | orchestrator | | status | ACTIVE | 2025-07-12 20:39:08.419742 | orchestrator | | tags | test | 2025-07-12 20:39:08.419755 | orchestrator | | trusted_image_certificates | None | 2025-07-12 20:39:08.419764 | orchestrator | | updated | 2025-07-12T20:37:43Z | 2025-07-12 20:39:08.419777 | orchestrator | | user_id | 96b4bee917f14ae1861c60180b8008bf | 2025-07-12 20:39:08.419785 | orchestrator | | volumes_attached | | 2025-07-12 20:39:08.424414 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 20:39:08.722886 | orchestrator | + openstack --os-cloud test server show test-4 2025-07-12 20:39:12.008664 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 20:39:12.008767 | orchestrator | | Field | Value | 2025-07-12 20:39:12.008783 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 
20:39:12.008795 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-07-12 20:39:12.008806 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-07-12 20:39:12.008817 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-07-12 20:39:12.008828 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-07-12 20:39:12.008845 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-07-12 20:39:12.008857 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-07-12 20:39:12.008868 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-07-12 20:39:12.008879 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-07-12 20:39:12.008924 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-07-12 20:39:12.008938 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-07-12 20:39:12.008949 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-07-12 20:39:12.008959 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-07-12 20:39:12.008970 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-07-12 20:39:12.008981 | orchestrator | | OS-EXT-STS:task_state | None | 2025-07-12 20:39:12.008992 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-07-12 20:39:12.009007 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-12T20:37:14.000000 | 2025-07-12 20:39:12.009046 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-07-12 20:39:12.009059 | orchestrator | | accessIPv4 | | 2025-07-12 20:39:12.009072 | orchestrator | | accessIPv6 | | 2025-07-12 20:39:12.009102 | orchestrator | | addresses | auto_allocated_network=10.42.0.28, 192.168.112.168 | 2025-07-12 20:39:12.009129 | orchestrator | | config_drive | | 2025-07-12 20:39:12.009148 | orchestrator | | created | 2025-07-12T20:36:58Z | 2025-07-12 20:39:12.009166 | orchestrator | | description | None | 2025-07-12 20:39:12.009185 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-07-12 20:39:12.009199 | orchestrator | | hostId | fd136cf2848b95e69c76d842d5693a776e08fb9ce6175c23645bcb78 | 2025-07-12 20:39:12.009212 | orchestrator | | host_status | None | 2025-07-12 20:39:12.009232 | orchestrator | | id | a9ea7513-5453-405c-bf3a-7dff6ae3374d | 2025-07-12 20:39:12.009266 | orchestrator | | image | Cirros 0.6.2 (6cd5b23e-1ab7-45ef-957e-105c4ac6c74f) | 2025-07-12 20:39:12.009287 | orchestrator | | key_name | test | 2025-07-12 20:39:12.009305 | orchestrator | | locked | False | 2025-07-12 20:39:12.009334 | orchestrator | | locked_reason | None | 2025-07-12 20:39:12.009352 | orchestrator | | name | test-4 | 2025-07-12 20:39:12.009378 | orchestrator | | pinned_availability_zone | None | 2025-07-12 20:39:12.009394 | orchestrator | | progress | 0 | 2025-07-12 20:39:12.009410 | orchestrator | | project_id | 1ab120b1563f4a80ae458262d2195e37 | 2025-07-12 20:39:12.009426 | orchestrator | | properties | hostname='test-4' | 2025-07-12 20:39:12.009443 | orchestrator | | security_groups | name='ssh' | 2025-07-12 20:39:12.009461 | orchestrator | | | name='icmp' | 2025-07-12 20:39:12.009480 | orchestrator | | server_groups | None | 2025-07-12 20:39:12.009505 | orchestrator | | status | ACTIVE | 2025-07-12 20:39:12.009526 | orchestrator | | tags | test | 2025-07-12 20:39:12.009555 | orchestrator | | trusted_image_certificates | None | 2025-07-12 20:39:12.009573 | orchestrator | | updated | 2025-07-12T20:37:48Z | 2025-07-12 20:39:12.009593 | orchestrator | | user_id | 96b4bee917f14ae1861c60180b8008bf | 2025-07-12 20:39:12.009604 | orchestrator | | volumes_attached | | 2025-07-12 20:39:12.017245 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-12 20:39:12.309425 | orchestrator | + server_ping 2025-07-12 20:39:12.311209 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-12 20:39:12.311301 | orchestrator | ++ tr -d '\r' 2025-07-12 20:39:15.223987 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:39:15.224124 | orchestrator | + ping -c3 192.168.112.200 2025-07-12 20:39:15.236381 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data. 2025-07-12 20:39:15.236459 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=6.67 ms 2025-07-12 20:39:16.234588 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.79 ms 2025-07-12 20:39:17.236486 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=2.27 ms 2025-07-12 20:39:17.236590 | orchestrator | 2025-07-12 20:39:17.236606 | orchestrator | --- 192.168.112.200 ping statistics --- 2025-07-12 20:39:17.236620 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-12 20:39:17.236631 | orchestrator | rtt min/avg/max/mdev = 2.271/3.910/6.671/1.963 ms 2025-07-12 20:39:17.237208 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:39:17.237233 | orchestrator | + ping -c3 192.168.112.136 2025-07-12 20:39:17.250646 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data. 
2025-07-12 20:39:17.250734 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=8.26 ms 2025-07-12 20:39:18.246573 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.54 ms 2025-07-12 20:39:19.248253 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.73 ms 2025-07-12 20:39:19.248352 | orchestrator | 2025-07-12 20:39:19.248368 | orchestrator | --- 192.168.112.136 ping statistics --- 2025-07-12 20:39:19.248380 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:39:19.248391 | orchestrator | rtt min/avg/max/mdev = 1.726/4.176/8.259/2.906 ms 2025-07-12 20:39:19.248430 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:39:19.248442 | orchestrator | + ping -c3 192.168.112.192 2025-07-12 20:39:19.260322 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2025-07-12 20:39:19.260410 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=7.09 ms 2025-07-12 20:39:20.257120 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.23 ms 2025-07-12 20:39:21.258273 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.36 ms 2025-07-12 20:39:21.258392 | orchestrator | 2025-07-12 20:39:21.258416 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-07-12 20:39:21.258435 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:39:21.258454 | orchestrator | rtt min/avg/max/mdev = 1.356/3.559/7.091/2.522 ms 2025-07-12 20:39:21.258472 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:39:21.258490 | orchestrator | + ping -c3 192.168.112.110 2025-07-12 20:39:21.273146 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data. 
2025-07-12 20:39:21.273277 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=8.43 ms 2025-07-12 20:39:22.266924 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.67 ms 2025-07-12 20:39:23.269728 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.32 ms 2025-07-12 20:39:23.269819 | orchestrator | 2025-07-12 20:39:23.269833 | orchestrator | --- 192.168.112.110 ping statistics --- 2025-07-12 20:39:23.269844 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:39:23.269853 | orchestrator | rtt min/avg/max/mdev = 2.316/4.472/8.431/2.802 ms 2025-07-12 20:39:23.269864 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:39:23.269874 | orchestrator | + ping -c3 192.168.112.168 2025-07-12 20:39:23.281250 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 2025-07-12 20:39:23.281322 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=7.12 ms 2025-07-12 20:39:24.278963 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=3.08 ms 2025-07-12 20:39:25.280279 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.89 ms 2025-07-12 20:39:25.280400 | orchestrator | 2025-07-12 20:39:25.280426 | orchestrator | --- 192.168.112.168 ping statistics --- 2025-07-12 20:39:25.280446 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:39:25.280481 | orchestrator | rtt min/avg/max/mdev = 1.891/4.031/7.122/2.238 ms 2025-07-12 20:39:25.280501 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-12 20:39:25.280523 | orchestrator | + compute_list 2025-07-12 20:39:25.280542 | orchestrator | + osism manage compute list testbed-node-3 2025-07-12 20:39:28.510269 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:39:28.510364 | 
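The `server_ping` helper traced above lists all ACTIVE floating IPs and pings each one three times. A minimal sketch of that helper, reconstructed from the `set -x` trace (the cloud name `test` and the `tr -d '\r'` cleanup are taken directly from the log; factoring the CR-stripping into its own function is my addition):

```shell
# Strip carriage returns that the CLI output can carry.
strip_cr() { tr -d '\r'; }

# Sketch of the server_ping helper seen in the trace: ping every
# ACTIVE floating IP three times, failing fast on the first miss.
server_ping() {
    local address
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | strip_cr); do
        ping -c3 "$address" || return 1
    done
}
```

The CR stripping matters because `-f value` output piped through some transports arrives with `\r\n` line endings, which would otherwise corrupt the address passed to `ping`.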
orchestrator | | ID | Name | Status | 2025-07-12 20:39:28.510373 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 20:39:28.510380 | orchestrator | | f7b5715f-402c-4fa7-880c-a9f6bd896a69 | test-2 | ACTIVE | 2025-07-12 20:39:28.510386 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:39:28.796734 | orchestrator | + osism manage compute list testbed-node-4 2025-07-12 20:39:32.032759 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:39:32.032855 | orchestrator | | ID | Name | Status | 2025-07-12 20:39:32.032864 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 20:39:32.032872 | orchestrator | | a9ea7513-5453-405c-bf3a-7dff6ae3374d | test-4 | ACTIVE | 2025-07-12 20:39:32.032880 | orchestrator | | 35cef813-8199-4ddf-9cad-a138373fb1c9 | test-1 | ACTIVE | 2025-07-12 20:39:32.032887 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:39:32.335678 | orchestrator | + osism manage compute list testbed-node-5 2025-07-12 20:39:35.459504 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:39:35.459632 | orchestrator | | ID | Name | Status | 2025-07-12 20:39:35.459647 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 20:39:35.459687 | orchestrator | | 99650b31-34ae-4f82-ba45-6e0fd343df23 | test-3 | ACTIVE | 2025-07-12 20:39:35.459699 | orchestrator | | bfc3b151-a640-421f-8d7c-3bfe01af42ef | test | ACTIVE | 2025-07-12 20:39:35.459710 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:39:35.773572 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-07-12 20:39:38.852446 | orchestrator | 2025-07-12 20:39:38 | INFO  | Live migrating server a9ea7513-5453-405c-bf3a-7dff6ae3374d 2025-07-12 20:39:51.972188 | orchestrator | 
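The `compute_list` step above prints, per hypervisor, the instances currently pinned to it. A hedged guess at the equivalent plain-CLI invocation (`osism manage compute list` is an OSISM wrapper; the `--host`/`--all-projects` form shown here is an assumption about what it maps to, and `compute_list_cmd` is a hypothetical helper):

```shell
# Build the (assumed) equivalent openstack command for listing all
# servers scheduled onto one compute node; --host is admin-only.
compute_list_cmd() {
    printf 'openstack --os-cloud test server list --all-projects --host %s\n' "$1"
}
```

Usage: `$(compute_list_cmd testbed-node-3)` would then emit the same ID/Name/Status table the log shows for each node.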
2025-07-12 20:39:51 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:39:54.607389 | orchestrator | 2025-07-12 20:39:54 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:39:56.976931 | orchestrator | 2025-07-12 20:39:56 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:39:59.599279 | orchestrator | 2025-07-12 20:39:59 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:40:01.861198 | orchestrator | 2025-07-12 20:40:01 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:40:04.638824 | orchestrator | 2025-07-12 20:40:04 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:40:06.938949 | orchestrator | 2025-07-12 20:40:06 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:40:09.294196 | orchestrator | 2025-07-12 20:40:09 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) completed with status ACTIVE 2025-07-12 20:40:09.294306 | orchestrator | 2025-07-12 20:40:09 | INFO  | Live migrating server 35cef813-8199-4ddf-9cad-a138373fb1c9 2025-07-12 20:40:22.314416 | orchestrator | 2025-07-12 20:40:22 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:40:24.687608 | orchestrator | 2025-07-12 20:40:24 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:40:27.049576 | orchestrator | 2025-07-12 20:40:27 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:40:29.519738 | orchestrator | 2025-07-12 20:40:29 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in 
progress 2025-07-12 20:40:32.221394 | orchestrator | 2025-07-12 20:40:32 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:40:34.590351 | orchestrator | 2025-07-12 20:40:34 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:40:36.923570 | orchestrator | 2025-07-12 20:40:36 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) completed with status ACTIVE 2025-07-12 20:40:37.213256 | orchestrator | + compute_list 2025-07-12 20:40:37.213314 | orchestrator | + osism manage compute list testbed-node-3 2025-07-12 20:40:40.322528 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:40:40.322607 | orchestrator | | ID | Name | Status | 2025-07-12 20:40:40.322617 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 20:40:40.322624 | orchestrator | | a9ea7513-5453-405c-bf3a-7dff6ae3374d | test-4 | ACTIVE | 2025-07-12 20:40:40.322631 | orchestrator | | f7b5715f-402c-4fa7-880c-a9f6bd896a69 | test-2 | ACTIVE | 2025-07-12 20:40:40.322639 | orchestrator | | 35cef813-8199-4ddf-9cad-a138373fb1c9 | test-1 | ACTIVE | 2025-07-12 20:40:40.322646 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:40:40.640269 | orchestrator | + osism manage compute list testbed-node-4 2025-07-12 20:40:43.271388 | orchestrator | +------+--------+----------+ 2025-07-12 20:40:43.271460 | orchestrator | | ID | Name | Status | 2025-07-12 20:40:43.271466 | orchestrator | |------+--------+----------| 2025-07-12 20:40:43.271471 | orchestrator | +------+--------+----------+ 2025-07-12 20:40:43.599043 | orchestrator | + osism manage compute list testbed-node-5 2025-07-12 20:40:46.770839 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:40:46.770914 | orchestrator | | ID | Name | Status | 2025-07-12 20:40:46.770921 | 
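The repeated "still in progress" lines above come from `osism manage compute migrate` polling each live migration until it settles. A sketch of such a poll loop, assuming (not confirmed by the log) that completion is detected via Nova's `OS-EXT-STS:task_state` clearing back to `None`; `migration_done` and `wait_for_migration` are hypothetical names:

```shell
# A migration is treated as finished once task_state clears;
# the CLI prints "None" for an empty task_state.
migration_done() {
    [ "$1" = "None" ] || [ -z "$1" ]
}

# Poll one server until its live migration settles.
wait_for_migration() {
    local server="$1" state
    while state=$(openstack --os-cloud test server show "$server" \
                      -f value -c OS-EXT-STS:task_state); ! migration_done "$state"; do
        echo "Live migration of $server is still in progress"
        sleep 3
    done
}
```

After the loop exits, a final `server show` of `OS-EXT-STS:vm_state` would confirm the `completed with status ACTIVE` outcome the log reports.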
orchestrator | |--------------------------------------+--------+----------| 2025-07-12 20:40:46.770925 | orchestrator | | 99650b31-34ae-4f82-ba45-6e0fd343df23 | test-3 | ACTIVE | 2025-07-12 20:40:46.770929 | orchestrator | | bfc3b151-a640-421f-8d7c-3bfe01af42ef | test | ACTIVE | 2025-07-12 20:40:46.770933 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:40:47.048133 | orchestrator | + server_ping 2025-07-12 20:40:47.049054 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-12 20:40:47.049086 | orchestrator | ++ tr -d '\r' 2025-07-12 20:40:49.865969 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:40:49.866157 | orchestrator | + ping -c3 192.168.112.200 2025-07-12 20:40:49.876478 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data. 2025-07-12 20:40:49.876570 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=6.10 ms 2025-07-12 20:40:50.874765 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.70 ms 2025-07-12 20:40:51.875733 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.97 ms 2025-07-12 20:40:51.875863 | orchestrator | 2025-07-12 20:40:51.875889 | orchestrator | --- 192.168.112.200 ping statistics --- 2025-07-12 20:40:51.875912 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:40:51.875929 | orchestrator | rtt min/avg/max/mdev = 1.965/3.588/6.096/1.798 ms 2025-07-12 20:40:51.876224 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:40:51.876255 | orchestrator | + ping -c3 192.168.112.136 2025-07-12 20:40:51.888749 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data. 
2025-07-12 20:40:51.888837 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=8.25 ms 2025-07-12 20:40:52.885110 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.81 ms 2025-07-12 20:40:53.886419 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.48 ms 2025-07-12 20:40:53.886487 | orchestrator | 2025-07-12 20:40:53.886494 | orchestrator | --- 192.168.112.136 ping statistics --- 2025-07-12 20:40:53.886499 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-12 20:40:53.886504 | orchestrator | rtt min/avg/max/mdev = 1.476/4.180/8.254/2.931 ms 2025-07-12 20:40:53.886509 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:40:53.886516 | orchestrator | + ping -c3 192.168.112.192 2025-07-12 20:40:53.898102 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2025-07-12 20:40:53.898193 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.80 ms 2025-07-12 20:40:54.894756 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.36 ms 2025-07-12 20:40:55.895793 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.39 ms 2025-07-12 20:40:55.895885 | orchestrator | 2025-07-12 20:40:55.895897 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-07-12 20:40:55.895906 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:40:55.895914 | orchestrator | rtt min/avg/max/mdev = 1.385/3.516/6.801/2.356 ms 2025-07-12 20:40:55.896290 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:40:55.896306 | orchestrator | + ping -c3 192.168.112.110 2025-07-12 20:40:55.909094 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data. 
2025-07-12 20:40:55.909146 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=7.36 ms 2025-07-12 20:40:56.906485 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.71 ms 2025-07-12 20:40:57.910270 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=3.91 ms 2025-07-12 20:40:57.910358 | orchestrator | 2025-07-12 20:40:57.910368 | orchestrator | --- 192.168.112.110 ping statistics --- 2025-07-12 20:40:57.910377 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-12 20:40:57.910385 | orchestrator | rtt min/avg/max/mdev = 2.706/4.657/7.358/1.971 ms 2025-07-12 20:40:57.910392 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:40:57.910401 | orchestrator | + ping -c3 192.168.112.168 2025-07-12 20:40:57.922504 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 2025-07-12 20:40:57.922588 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=9.82 ms 2025-07-12 20:40:58.918499 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=4.24 ms 2025-07-12 20:40:59.918568 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=2.07 ms 2025-07-12 20:40:59.918666 | orchestrator | 2025-07-12 20:40:59.918681 | orchestrator | --- 192.168.112.168 ping statistics --- 2025-07-12 20:40:59.918693 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:40:59.918704 | orchestrator | rtt min/avg/max/mdev = 2.065/5.374/9.820/3.266 ms 2025-07-12 20:40:59.918716 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-07-12 20:41:02.967053 | orchestrator | 2025-07-12 20:41:02 | INFO  | Live migrating server 99650b31-34ae-4f82-ba45-6e0fd343df23 2025-07-12 20:41:16.059665 | orchestrator | 2025-07-12 20:41:16 | INFO  | Live migration of 
99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:41:18.383126 | orchestrator | 2025-07-12 20:41:18 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:41:20.688188 | orchestrator | 2025-07-12 20:41:20 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:41:22.976054 | orchestrator | 2025-07-12 20:41:22 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:41:25.274935 | orchestrator | 2025-07-12 20:41:25 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:41:27.687693 | orchestrator | 2025-07-12 20:41:27 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:41:30.043845 | orchestrator | 2025-07-12 20:41:30 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:41:32.303828 | orchestrator | 2025-07-12 20:41:32 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) completed with status ACTIVE 2025-07-12 20:41:32.303935 | orchestrator | 2025-07-12 20:41:32 | INFO  | Live migrating server bfc3b151-a640-421f-8d7c-3bfe01af42ef 2025-07-12 20:41:44.594300 | orchestrator | 2025-07-12 20:41:44 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:41:46.905009 | orchestrator | 2025-07-12 20:41:46 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:41:49.270595 | orchestrator | 2025-07-12 20:41:49 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:41:51.657279 | orchestrator | 2025-07-12 20:41:51 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:41:53.992716 | orchestrator | 
2025-07-12 20:41:53 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:41:56.374738 | orchestrator | 2025-07-12 20:41:56 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:41:58.645731 | orchestrator | 2025-07-12 20:41:58 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:42:00.873885 | orchestrator | 2025-07-12 20:42:00 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:42:03.207872 | orchestrator | 2025-07-12 20:42:03 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) completed with status ACTIVE 2025-07-12 20:42:03.526634 | orchestrator | + compute_list 2025-07-12 20:42:03.526731 | orchestrator | + osism manage compute list testbed-node-3 2025-07-12 20:42:06.761664 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:42:06.761768 | orchestrator | | ID | Name | Status | 2025-07-12 20:42:06.761782 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 20:42:06.761795 | orchestrator | | a9ea7513-5453-405c-bf3a-7dff6ae3374d | test-4 | ACTIVE | 2025-07-12 20:42:06.761806 | orchestrator | | 99650b31-34ae-4f82-ba45-6e0fd343df23 | test-3 | ACTIVE | 2025-07-12 20:42:06.761818 | orchestrator | | f7b5715f-402c-4fa7-880c-a9f6bd896a69 | test-2 | ACTIVE | 2025-07-12 20:42:06.761829 | orchestrator | | 35cef813-8199-4ddf-9cad-a138373fb1c9 | test-1 | ACTIVE | 2025-07-12 20:42:06.761840 | orchestrator | | bfc3b151-a640-421f-8d7c-3bfe01af42ef | test | ACTIVE | 2025-07-12 20:42:06.761851 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:42:07.048765 | orchestrator | + osism manage compute list testbed-node-4 2025-07-12 20:42:09.762260 | orchestrator | +------+--------+----------+ 2025-07-12 20:42:09.762386 | orchestrator | | ID 
| Name | Status | 2025-07-12 20:42:09.762413 | orchestrator | |------+--------+----------| 2025-07-12 20:42:09.762434 | orchestrator | +------+--------+----------+ 2025-07-12 20:42:10.071536 | orchestrator | + osism manage compute list testbed-node-5 2025-07-12 20:42:12.783819 | orchestrator | +------+--------+----------+ 2025-07-12 20:42:12.783912 | orchestrator | | ID | Name | Status | 2025-07-12 20:42:12.783928 | orchestrator | |------+--------+----------| 2025-07-12 20:42:12.783939 | orchestrator | +------+--------+----------+ 2025-07-12 20:42:13.108235 | orchestrator | + server_ping 2025-07-12 20:42:13.109487 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-12 20:42:13.109562 | orchestrator | ++ tr -d '\r' 2025-07-12 20:42:16.286273 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:42:16.286375 | orchestrator | + ping -c3 192.168.112.200 2025-07-12 20:42:16.295497 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data. 
2025-07-12 20:42:16.295557 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=6.46 ms 2025-07-12 20:42:17.294239 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.93 ms 2025-07-12 20:42:18.294781 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=2.01 ms 2025-07-12 20:42:18.294909 | orchestrator | 2025-07-12 20:42:18.294938 | orchestrator | --- 192.168.112.200 ping statistics --- 2025-07-12 20:42:18.294959 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:42:18.294978 | orchestrator | rtt min/avg/max/mdev = 2.008/3.799/6.462/1.920 ms 2025-07-12 20:42:18.295290 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:42:18.295320 | orchestrator | + ping -c3 192.168.112.136 2025-07-12 20:42:18.307392 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data. 2025-07-12 20:42:18.307488 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=7.74 ms 2025-07-12 20:42:19.304590 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.69 ms 2025-07-12 20:42:20.305652 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.73 ms 2025-07-12 20:42:20.305776 | orchestrator | 2025-07-12 20:42:20.305814 | orchestrator | --- 192.168.112.136 ping statistics --- 2025-07-12 20:42:20.305833 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-12 20:42:20.305849 | orchestrator | rtt min/avg/max/mdev = 1.727/4.051/7.738/2.636 ms 2025-07-12 20:42:20.306188 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:42:20.306216 | orchestrator | + ping -c3 192.168.112.192 2025-07-12 20:42:20.315552 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 
2025-07-12 20:42:20.315633 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.01 ms 2025-07-12 20:42:21.313685 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.76 ms 2025-07-12 20:42:22.315705 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=2.33 ms 2025-07-12 20:42:22.315805 | orchestrator | 2025-07-12 20:42:22.315818 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-07-12 20:42:22.315829 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:42:22.315839 | orchestrator | rtt min/avg/max/mdev = 2.325/3.698/6.010/1.644 ms 2025-07-12 20:42:22.316210 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:42:22.316243 | orchestrator | + ping -c3 192.168.112.110 2025-07-12 20:42:22.331577 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data. 2025-07-12 20:42:22.331665 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=9.60 ms 2025-07-12 20:42:23.326816 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.79 ms 2025-07-12 20:42:24.327671 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.03 ms 2025-07-12 20:42:24.327784 | orchestrator | 2025-07-12 20:42:24.327803 | orchestrator | --- 192.168.112.110 ping statistics --- 2025-07-12 20:42:24.327819 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:42:24.327833 | orchestrator | rtt min/avg/max/mdev = 2.028/4.806/9.598/3.402 ms 2025-07-12 20:42:24.327846 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:42:24.327860 | orchestrator | + ping -c3 192.168.112.168 2025-07-12 20:42:24.341121 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 
2025-07-12 20:42:24.341195 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=8.49 ms 2025-07-12 20:42:25.337064 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.54 ms 2025-07-12 20:42:26.339251 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=2.18 ms 2025-07-12 20:42:26.339332 | orchestrator | 2025-07-12 20:42:26.339341 | orchestrator | --- 192.168.112.168 ping statistics --- 2025-07-12 20:42:26.339348 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-12 20:42:26.339356 | orchestrator | rtt min/avg/max/mdev = 2.182/4.403/8.490/2.893 ms 2025-07-12 20:42:26.339362 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-07-12 20:42:29.866620 | orchestrator | 2025-07-12 20:42:29 | INFO  | Live migrating server a9ea7513-5453-405c-bf3a-7dff6ae3374d 2025-07-12 20:42:40.582987 | orchestrator | 2025-07-12 20:42:40 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:42:42.935135 | orchestrator | 2025-07-12 20:42:42 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:42:45.270337 | orchestrator | 2025-07-12 20:42:45 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:42:47.555416 | orchestrator | 2025-07-12 20:42:47 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:42:49.907538 | orchestrator | 2025-07-12 20:42:49 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:42:52.268892 | orchestrator | 2025-07-12 20:42:52 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress 2025-07-12 20:42:54.614442 | orchestrator | 2025-07-12 20:42:54 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is 
still in progress 2025-07-12 20:42:56.900966 | orchestrator | 2025-07-12 20:42:56 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) completed with status ACTIVE 2025-07-12 20:42:56.901083 | orchestrator | 2025-07-12 20:42:56 | INFO  | Live migrating server 99650b31-34ae-4f82-ba45-6e0fd343df23 2025-07-12 20:43:08.094384 | orchestrator | 2025-07-12 20:43:08 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:43:10.469793 | orchestrator | 2025-07-12 20:43:10 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:43:12.779369 | orchestrator | 2025-07-12 20:43:12 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:43:15.151003 | orchestrator | 2025-07-12 20:43:15 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:43:17.476680 | orchestrator | 2025-07-12 20:43:17 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:43:19.827733 | orchestrator | 2025-07-12 20:43:19 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:43:22.179356 | orchestrator | 2025-07-12 20:43:22 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress 2025-07-12 20:43:24.502907 | orchestrator | 2025-07-12 20:43:24 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) completed with status ACTIVE 2025-07-12 20:43:24.503050 | orchestrator | 2025-07-12 20:43:24 | INFO  | Live migrating server f7b5715f-402c-4fa7-880c-a9f6bd896a69 2025-07-12 20:43:35.249497 | orchestrator | 2025-07-12 20:43:35 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress 2025-07-12 20:43:37.641388 | orchestrator | 2025-07-12 20:43:37 | INFO  | Live migration of 
f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress 2025-07-12 20:43:39.953023 | orchestrator | 2025-07-12 20:43:39 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress 2025-07-12 20:43:42.300727 | orchestrator | 2025-07-12 20:43:42 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress 2025-07-12 20:43:44.570177 | orchestrator | 2025-07-12 20:43:44 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress 2025-07-12 20:43:46.888309 | orchestrator | 2025-07-12 20:43:46 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress 2025-07-12 20:43:49.170385 | orchestrator | 2025-07-12 20:43:49 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress 2025-07-12 20:43:51.452575 | orchestrator | 2025-07-12 20:43:51 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress 2025-07-12 20:43:53.729501 | orchestrator | 2025-07-12 20:43:53 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) completed with status ACTIVE 2025-07-12 20:43:53.729612 | orchestrator | 2025-07-12 20:43:53 | INFO  | Live migrating server 35cef813-8199-4ddf-9cad-a138373fb1c9 2025-07-12 20:44:03.331963 | orchestrator | 2025-07-12 20:44:03 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:44:05.658769 | orchestrator | 2025-07-12 20:44:05 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:44:07.985714 | orchestrator | 2025-07-12 20:44:07 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:44:10.236502 | orchestrator | 2025-07-12 20:44:10 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:44:12.544488 | orchestrator 
| 2025-07-12 20:44:12 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:44:14.839729 | orchestrator | 2025-07-12 20:44:14 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress 2025-07-12 20:44:17.198438 | orchestrator | 2025-07-12 20:44:17 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) completed with status ACTIVE 2025-07-12 20:44:17.198543 | orchestrator | 2025-07-12 20:44:17 | INFO  | Live migrating server bfc3b151-a640-421f-8d7c-3bfe01af42ef 2025-07-12 20:44:27.432336 | orchestrator | 2025-07-12 20:44:27 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:44:29.752527 | orchestrator | 2025-07-12 20:44:29 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:44:32.106964 | orchestrator | 2025-07-12 20:44:32 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:44:34.505500 | orchestrator | 2025-07-12 20:44:34 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:44:36.842503 | orchestrator | 2025-07-12 20:44:36 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:44:39.128842 | orchestrator | 2025-07-12 20:44:39 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:44:41.411436 | orchestrator | 2025-07-12 20:44:41 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:44:44.173454 | orchestrator | 2025-07-12 20:44:44 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 20:44:46.485461 | orchestrator | 2025-07-12 20:44:46 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress 2025-07-12 
20:44:48.787333 | orchestrator | 2025-07-12 20:44:48 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) completed with status ACTIVE 2025-07-12 20:44:49.097030 | orchestrator | + compute_list 2025-07-12 20:44:49.097130 | orchestrator | + osism manage compute list testbed-node-3 2025-07-12 20:44:51.917804 | orchestrator | +------+--------+----------+ 2025-07-12 20:44:51.917922 | orchestrator | | ID | Name | Status | 2025-07-12 20:44:51.917938 | orchestrator | |------+--------+----------| 2025-07-12 20:44:51.917950 | orchestrator | +------+--------+----------+ 2025-07-12 20:44:52.262404 | orchestrator | + osism manage compute list testbed-node-4 2025-07-12 20:44:55.616963 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:44:55.617069 | orchestrator | | ID | Name | Status | 2025-07-12 20:44:55.617083 | orchestrator | |--------------------------------------+--------+----------| 2025-07-12 20:44:55.617095 | orchestrator | | a9ea7513-5453-405c-bf3a-7dff6ae3374d | test-4 | ACTIVE | 2025-07-12 20:44:55.617106 | orchestrator | | 99650b31-34ae-4f82-ba45-6e0fd343df23 | test-3 | ACTIVE | 2025-07-12 20:44:55.617117 | orchestrator | | f7b5715f-402c-4fa7-880c-a9f6bd896a69 | test-2 | ACTIVE | 2025-07-12 20:44:55.617127 | orchestrator | | 35cef813-8199-4ddf-9cad-a138373fb1c9 | test-1 | ACTIVE | 2025-07-12 20:44:55.617138 | orchestrator | | bfc3b151-a640-421f-8d7c-3bfe01af42ef | test | ACTIVE | 2025-07-12 20:44:55.617149 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-12 20:44:55.930744 | orchestrator | + osism manage compute list testbed-node-5 2025-07-12 20:44:58.727620 | orchestrator | +------+--------+----------+ 2025-07-12 20:44:58.727738 | orchestrator | | ID | Name | Status | 2025-07-12 20:44:58.727753 | orchestrator | |------+--------+----------| 2025-07-12 20:44:58.727765 | orchestrator | +------+--------+----------+ 2025-07-12 20:44:59.098708 | orchestrator | + server_ping 
2025-07-12 20:44:59.099504 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-12 20:44:59.099550 | orchestrator | ++ tr -d '\r' 2025-07-12 20:45:02.168759 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:45:02.168886 | orchestrator | + ping -c3 192.168.112.200 2025-07-12 20:45:02.177055 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data. 2025-07-12 20:45:02.177112 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=5.47 ms 2025-07-12 20:45:03.175798 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=2.70 ms 2025-07-12 20:45:04.176658 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.68 ms 2025-07-12 20:45:04.176780 | orchestrator | 2025-07-12 20:45:04.176796 | orchestrator | --- 192.168.112.200 ping statistics --- 2025-07-12 20:45:04.176808 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:45:04.176819 | orchestrator | rtt min/avg/max/mdev = 1.678/3.282/5.469/1.601 ms 2025-07-12 20:45:04.177410 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:45:04.177441 | orchestrator | + ping -c3 192.168.112.136 2025-07-12 20:45:04.190291 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data. 
2025-07-12 20:45:04.190375 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=8.33 ms 2025-07-12 20:45:05.185576 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=1.97 ms 2025-07-12 20:45:06.187526 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=1.99 ms 2025-07-12 20:45:06.187626 | orchestrator | 2025-07-12 20:45:06.187642 | orchestrator | --- 192.168.112.136 ping statistics --- 2025-07-12 20:45:06.187654 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:45:06.187666 | orchestrator | rtt min/avg/max/mdev = 1.965/4.094/8.329/2.994 ms 2025-07-12 20:45:06.187677 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:45:06.187689 | orchestrator | + ping -c3 192.168.112.192 2025-07-12 20:45:06.197883 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2025-07-12 20:45:06.197951 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=5.88 ms 2025-07-12 20:45:07.195594 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.20 ms 2025-07-12 20:45:08.196748 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.78 ms 2025-07-12 20:45:08.196858 | orchestrator | 2025-07-12 20:45:08.196874 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-07-12 20:45:08.196887 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-12 20:45:08.196899 | orchestrator | rtt min/avg/max/mdev = 1.776/3.284/5.875/1.840 ms 2025-07-12 20:45:08.196910 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:45:08.196922 | orchestrator | + ping -c3 192.168.112.110 2025-07-12 20:45:08.209683 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data. 
2025-07-12 20:45:08.209774 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=7.87 ms 2025-07-12 20:45:09.205479 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.50 ms 2025-07-12 20:45:10.207474 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.08 ms 2025-07-12 20:45:10.207554 | orchestrator | 2025-07-12 20:45:10.207564 | orchestrator | --- 192.168.112.110 ping statistics --- 2025-07-12 20:45:10.207571 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:45:10.207578 | orchestrator | rtt min/avg/max/mdev = 2.083/4.148/7.866/2.634 ms 2025-07-12 20:45:10.207585 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 20:45:10.207592 | orchestrator | + ping -c3 192.168.112.168 2025-07-12 20:45:10.219704 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data. 2025-07-12 20:45:10.219793 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=6.92 ms 2025-07-12 20:45:11.216804 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.38 ms 2025-07-12 20:45:12.217644 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=1.73 ms 2025-07-12 20:45:12.217753 | orchestrator | 2025-07-12 20:45:12.217769 | orchestrator | --- 192.168.112.168 ping statistics --- 2025-07-12 20:45:12.217782 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 20:45:12.217793 | orchestrator | rtt min/avg/max/mdev = 1.734/3.677/6.915/2.304 ms 2025-07-12 20:45:12.218125 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-07-12 20:45:15.691738 | orchestrator | 2025-07-12 20:45:15 | INFO  | Live migrating server a9ea7513-5453-405c-bf3a-7dff6ae3374d 2025-07-12 20:45:26.748395 | orchestrator | 2025-07-12 20:45:26 | INFO  | Live migration of 
a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress
2025-07-12 20:45:29.115267 | orchestrator | 2025-07-12 20:45:29 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress
2025-07-12 20:45:31.596119 | orchestrator | 2025-07-12 20:45:31 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress
2025-07-12 20:45:33.890356 | orchestrator | 2025-07-12 20:45:33 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress
2025-07-12 20:45:36.134070 | orchestrator | 2025-07-12 20:45:36 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress
2025-07-12 20:45:38.385615 | orchestrator | 2025-07-12 20:45:38 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) is still in progress
2025-07-12 20:45:40.663826 | orchestrator | 2025-07-12 20:45:40 | INFO  | Live migration of a9ea7513-5453-405c-bf3a-7dff6ae3374d (test-4) completed with status ACTIVE
2025-07-12 20:45:40.663931 | orchestrator | 2025-07-12 20:45:40 | INFO  | Live migrating server 99650b31-34ae-4f82-ba45-6e0fd343df23
2025-07-12 20:45:51.354698 | orchestrator | 2025-07-12 20:45:51 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress
2025-07-12 20:45:53.839100 | orchestrator | 2025-07-12 20:45:53 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress
2025-07-12 20:45:56.203808 | orchestrator | 2025-07-12 20:45:56 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress
2025-07-12 20:45:58.451571 | orchestrator | 2025-07-12 20:45:58 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress
2025-07-12 20:46:00.719053 | orchestrator | 2025-07-12 20:46:00 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress
2025-07-12 20:46:02.993301 | orchestrator | 2025-07-12 20:46:02 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress
2025-07-12 20:46:05.296573 | orchestrator | 2025-07-12 20:46:05 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress
2025-07-12 20:46:07.659402 | orchestrator | 2025-07-12 20:46:07 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) is still in progress
2025-07-12 20:46:09.950604 | orchestrator | 2025-07-12 20:46:09 | INFO  | Live migration of 99650b31-34ae-4f82-ba45-6e0fd343df23 (test-3) completed with status ACTIVE
2025-07-12 20:46:09.950745 | orchestrator | 2025-07-12 20:46:09 | INFO  | Live migrating server f7b5715f-402c-4fa7-880c-a9f6bd896a69
2025-07-12 20:46:21.481216 | orchestrator | 2025-07-12 20:46:21 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress
2025-07-12 20:46:23.827968 | orchestrator | 2025-07-12 20:46:23 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress
2025-07-12 20:46:26.212416 | orchestrator | 2025-07-12 20:46:26 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress
2025-07-12 20:46:28.564359 | orchestrator | 2025-07-12 20:46:28 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress
2025-07-12 20:46:30.841893 | orchestrator | 2025-07-12 20:46:30 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress
2025-07-12 20:46:33.091400 | orchestrator | 2025-07-12 20:46:33 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress
2025-07-12 20:46:35.360653 | orchestrator | 2025-07-12 20:46:35 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) is still in progress
2025-07-12 20:46:37.720200 | orchestrator | 2025-07-12 20:46:37 | INFO  | Live migration of f7b5715f-402c-4fa7-880c-a9f6bd896a69 (test-2) completed with status ACTIVE
2025-07-12 20:46:37.720377 | orchestrator | 2025-07-12 20:46:37 | INFO  | Live migrating server 35cef813-8199-4ddf-9cad-a138373fb1c9
2025-07-12 20:46:48.022818 | orchestrator | 2025-07-12 20:46:48 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress
2025-07-12 20:46:50.334093 | orchestrator | 2025-07-12 20:46:50 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress
2025-07-12 20:46:52.652108 | orchestrator | 2025-07-12 20:46:52 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress
2025-07-12 20:46:55.000931 | orchestrator | 2025-07-12 20:46:54 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress
2025-07-12 20:46:57.270233 | orchestrator | 2025-07-12 20:46:57 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress
2025-07-12 20:46:59.559609 | orchestrator | 2025-07-12 20:46:59 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress
2025-07-12 20:47:01.853557 | orchestrator | 2025-07-12 20:47:01 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) is still in progress
2025-07-12 20:47:04.201857 | orchestrator | 2025-07-12 20:47:04 | INFO  | Live migration of 35cef813-8199-4ddf-9cad-a138373fb1c9 (test-1) completed with status ACTIVE
2025-07-12 20:47:04.201980 | orchestrator | 2025-07-12 20:47:04 | INFO  | Live migrating server bfc3b151-a640-421f-8d7c-3bfe01af42ef
2025-07-12 20:47:13.856006 | orchestrator | 2025-07-12 20:47:13 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress
2025-07-12 20:47:16.205039 | orchestrator | 2025-07-12 20:47:16 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress
2025-07-12 20:47:18.600615 | orchestrator | 2025-07-12 20:47:18 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress
2025-07-12 20:47:20.871829 | orchestrator | 2025-07-12 20:47:20 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress
2025-07-12 20:47:23.135608 | orchestrator | 2025-07-12 20:47:23 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress
2025-07-12 20:47:25.511861 | orchestrator | 2025-07-12 20:47:25 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress
2025-07-12 20:47:27.809511 | orchestrator | 2025-07-12 20:47:27 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress
2025-07-12 20:47:30.184990 | orchestrator | 2025-07-12 20:47:30 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) is still in progress
2025-07-12 20:47:32.541872 | orchestrator | 2025-07-12 20:47:32 | INFO  | Live migration of bfc3b151-a640-421f-8d7c-3bfe01af42ef (test) completed with status ACTIVE
2025-07-12 20:47:32.879634 | orchestrator | + compute_list
2025-07-12 20:47:32.879730 | orchestrator | + osism manage compute list testbed-node-3
2025-07-12 20:47:35.658831 | orchestrator | +------+--------+----------+
2025-07-12 20:47:35.658929 | orchestrator | | ID   | Name   | Status   |
2025-07-12 20:47:35.658942 | orchestrator | |------+--------+----------|
2025-07-12 20:47:35.658952 | orchestrator | +------+--------+----------+
2025-07-12 20:47:35.970644 | orchestrator | + osism manage compute list testbed-node-4
2025-07-12 20:47:38.674354 | orchestrator | +------+--------+----------+
2025-07-12 20:47:38.674462 | orchestrator | | ID   | Name   | Status   |
2025-07-12 20:47:38.674477 | orchestrator | |------+--------+----------|
2025-07-12 20:47:38.674489 | orchestrator | +------+--------+----------+
2025-07-12 20:47:38.993398 | orchestrator | + osism manage compute list testbed-node-5
2025-07-12 20:47:42.203183 | orchestrator |
+--------------------------------------+--------+----------+
2025-07-12 20:47:42.203287 | orchestrator | | ID                                   | Name   | Status   |
2025-07-12 20:47:42.203360 | orchestrator | |--------------------------------------+--------+----------|
2025-07-12 20:47:42.203384 | orchestrator | | a9ea7513-5453-405c-bf3a-7dff6ae3374d | test-4 | ACTIVE   |
2025-07-12 20:47:42.203404 | orchestrator | | 99650b31-34ae-4f82-ba45-6e0fd343df23 | test-3 | ACTIVE   |
2025-07-12 20:47:42.203420 | orchestrator | | f7b5715f-402c-4fa7-880c-a9f6bd896a69 | test-2 | ACTIVE   |
2025-07-12 20:47:42.203431 | orchestrator | | 35cef813-8199-4ddf-9cad-a138373fb1c9 | test-1 | ACTIVE   |
2025-07-12 20:47:42.203442 | orchestrator | | bfc3b151-a640-421f-8d7c-3bfe01af42ef | test   | ACTIVE   |
2025-07-12 20:47:42.203453 | orchestrator | +--------------------------------------+--------+----------+
2025-07-12 20:47:42.509484 | orchestrator | + server_ping
2025-07-12 20:47:42.510720 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-12 20:47:42.510847 | orchestrator | ++ tr -d '\r'
2025-07-12 20:47:45.468767 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 20:47:45.468873 | orchestrator | + ping -c3 192.168.112.200
2025-07-12 20:47:45.479465 | orchestrator | PING 192.168.112.200 (192.168.112.200) 56(84) bytes of data.
2025-07-12 20:47:45.479542 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=1 ttl=63 time=8.72 ms
2025-07-12 20:47:46.476006 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=2 ttl=63 time=3.08 ms
2025-07-12 20:47:47.476143 | orchestrator | 64 bytes from 192.168.112.200: icmp_seq=3 ttl=63 time=1.90 ms
2025-07-12 20:47:47.476261 | orchestrator |
2025-07-12 20:47:47.476276 | orchestrator | --- 192.168.112.200 ping statistics ---
2025-07-12 20:47:47.476288 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 20:47:47.476300 | orchestrator | rtt min/avg/max/mdev = 1.898/4.568/8.723/2.977 ms
2025-07-12 20:47:47.477008 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 20:47:47.477033 | orchestrator | + ping -c3 192.168.112.136
2025-07-12 20:47:47.490482 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data.
2025-07-12 20:47:47.490558 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=9.36 ms
2025-07-12 20:47:48.486581 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=3.53 ms
2025-07-12 20:47:49.488925 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=4.23 ms
2025-07-12 20:47:49.489036 | orchestrator |
2025-07-12 20:47:49.489053 | orchestrator | --- 192.168.112.136 ping statistics ---
2025-07-12 20:47:49.489066 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 20:47:49.489078 | orchestrator | rtt min/avg/max/mdev = 3.530/5.707/9.364/2.601 ms
2025-07-12 20:47:49.489457 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 20:47:49.489494 | orchestrator | + ping -c3 192.168.112.192
2025-07-12 20:47:49.504094 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-07-12 20:47:49.504167 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=9.63 ms
2025-07-12 20:47:50.498912 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.51 ms
2025-07-12 20:47:51.500761 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.94 ms
2025-07-12 20:47:51.500901 | orchestrator |
2025-07-12 20:47:51.500919 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-07-12 20:47:51.500932 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-07-12 20:47:51.500944 | orchestrator | rtt min/avg/max/mdev = 1.939/4.693/9.632/3.499 ms
2025-07-12 20:47:51.501090 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 20:47:51.501158 | orchestrator | + ping -c3 192.168.112.110
2025-07-12 20:47:51.515597 | orchestrator | PING 192.168.112.110 (192.168.112.110) 56(84) bytes of data.
2025-07-12 20:47:51.515670 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=1 ttl=63 time=9.59 ms
2025-07-12 20:47:52.510391 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=2 ttl=63 time=2.23 ms
2025-07-12 20:47:53.511801 | orchestrator | 64 bytes from 192.168.112.110: icmp_seq=3 ttl=63 time=2.20 ms
2025-07-12 20:47:53.511937 | orchestrator |
2025-07-12 20:47:53.511968 | orchestrator | --- 192.168.112.110 ping statistics ---
2025-07-12 20:47:53.511990 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 20:47:53.512074 | orchestrator | rtt min/avg/max/mdev = 2.197/4.673/9.592/3.478 ms
2025-07-12 20:47:53.512098 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 20:47:53.512117 | orchestrator | + ping -c3 192.168.112.168
2025-07-12 20:47:53.522591 | orchestrator | PING 192.168.112.168 (192.168.112.168) 56(84) bytes of data.
2025-07-12 20:47:53.522647 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=1 ttl=63 time=6.50 ms
2025-07-12 20:47:54.520567 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=2 ttl=63 time=2.52 ms
2025-07-12 20:47:55.522008 | orchestrator | 64 bytes from 192.168.112.168: icmp_seq=3 ttl=63 time=2.01 ms
2025-07-12 20:47:55.522353 | orchestrator |
2025-07-12 20:47:55.522390 | orchestrator | --- 192.168.112.168 ping statistics ---
2025-07-12 20:47:55.522411 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-07-12 20:47:55.522432 | orchestrator | rtt min/avg/max/mdev = 2.006/3.674/6.497/2.006 ms
2025-07-12 20:47:56.024483 | orchestrator | ok: Runtime: 0:21:43.175765
2025-07-12 20:47:56.091854 |
2025-07-12 20:47:56.091995 | TASK [Run tempest]
2025-07-12 20:47:56.626890 | orchestrator | skipping: Conditional result was False
2025-07-12 20:47:56.645434 |
2025-07-12 20:47:56.645636 | TASK [Check prometheus alert status]
2025-07-12 20:47:57.184000 | orchestrator | skipping: Conditional result was False
2025-07-12 20:47:57.187243 |
2025-07-12 20:47:57.187420 | PLAY RECAP
2025-07-12 20:47:57.187594 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-07-12 20:47:57.187668 |
2025-07-12 20:47:57.442658 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-12 20:47:57.445023 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 20:47:58.218710 |
2025-07-12 20:47:58.218917 | PLAY [Post output play]
2025-07-12 20:47:58.235929 |
2025-07-12 20:47:58.236089 | LOOP [stage-output : Register sources]
2025-07-12 20:47:58.289159 |
2025-07-12 20:47:58.289396 | TASK [stage-output : Check sudo]
2025-07-12 20:47:59.125569 | orchestrator | sudo: a password is required
2025-07-12 20:47:59.326940 | orchestrator | ok: Runtime: 0:00:00.018334
2025-07-12 20:47:59.343608 |
2025-07-12 20:47:59.343776 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-12 20:47:59.384906 |
2025-07-12 20:47:59.385374 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-12 20:47:59.468604 | orchestrator | ok
2025-07-12 20:47:59.477240 |
2025-07-12 20:47:59.477381 | LOOP [stage-output : Ensure target folders exist]
2025-07-12 20:47:59.943751 | orchestrator | ok: "docs"
2025-07-12 20:47:59.944142 |
2025-07-12 20:48:00.198041 | orchestrator | ok: "artifacts"
2025-07-12 20:48:00.485786 | orchestrator | ok: "logs"
2025-07-12 20:48:00.503465 |
2025-07-12 20:48:00.503734 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-12 20:48:00.549462 |
2025-07-12 20:48:00.549824 | TASK [stage-output : Make all log files readable]
2025-07-12 20:48:00.852641 | orchestrator | ok
2025-07-12 20:48:00.861197 |
2025-07-12 20:48:00.861334 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-12 20:48:00.896315 | orchestrator | skipping: Conditional result was False
2025-07-12 20:48:00.906256 |
2025-07-12 20:48:00.906395 | TASK [stage-output : Discover log files for compression]
2025-07-12 20:48:00.930644 | orchestrator | skipping: Conditional result was False
2025-07-12 20:48:00.940126 |
2025-07-12 20:48:00.940247 | LOOP [stage-output : Archive everything from logs]
2025-07-12 20:48:00.980868 |
2025-07-12 20:48:00.981034 | PLAY [Post cleanup play]
2025-07-12 20:48:00.988775 |
2025-07-12 20:48:00.988885 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 20:48:01.056108 | orchestrator | ok
2025-07-12 20:48:01.067556 |
2025-07-12 20:48:01.067711 | TASK [Set cloud fact (local deployment)]
2025-07-12 20:48:01.102529 | orchestrator | skipping: Conditional result was False
2025-07-12 20:48:01.118419 |
2025-07-12 20:48:01.118640 | TASK [Clean the cloud environment]
2025-07-12 20:48:02.557747 | orchestrator | 2025-07-12 20:48:02 - clean up servers
2025-07-12 20:48:03.348615 | orchestrator | 2025-07-12 20:48:03 - testbed-manager
2025-07-12 20:48:03.462298 | orchestrator | 2025-07-12 20:48:03 - testbed-node-5
2025-07-12 20:48:03.547671 | orchestrator | 2025-07-12 20:48:03 - testbed-node-0
2025-07-12 20:48:03.644029 | orchestrator | 2025-07-12 20:48:03 - testbed-node-1
2025-07-12 20:48:03.735692 | orchestrator | 2025-07-12 20:48:03 - testbed-node-4
2025-07-12 20:48:03.831872 | orchestrator | 2025-07-12 20:48:03 - testbed-node-3
2025-07-12 20:48:03.922913 | orchestrator | 2025-07-12 20:48:03 - testbed-node-2
2025-07-12 20:48:04.015695 | orchestrator | 2025-07-12 20:48:04 - clean up keypairs
2025-07-12 20:48:04.033871 | orchestrator | 2025-07-12 20:48:04 - testbed
2025-07-12 20:48:04.064294 | orchestrator | 2025-07-12 20:48:04 - wait for servers to be gone
2025-07-12 20:48:12.817446 | orchestrator | 2025-07-12 20:48:12 - clean up ports
2025-07-12 20:48:12.987908 | orchestrator | 2025-07-12 20:48:12 - 01776f6e-eb52-4d29-8f22-51e0331b8573
2025-07-12 20:48:13.289956 | orchestrator | 2025-07-12 20:48:13 - 25ac216b-f317-4db4-b305-c519d2fa364d
2025-07-12 20:48:13.560961 | orchestrator | 2025-07-12 20:48:13 - 44311b46-b612-48dd-96bc-1da96dd3882d
2025-07-12 20:48:13.793457 | orchestrator | 2025-07-12 20:48:13 - 55f6b778-f9ff-43de-bfaf-4476a51faf18
2025-07-12 20:48:14.031657 | orchestrator | 2025-07-12 20:48:14 - 5c8dea61-0bff-420e-b1d3-12479ef33cdb
2025-07-12 20:48:14.246527 | orchestrator | 2025-07-12 20:48:14 - d0e55e98-42f1-49b3-8c9e-a128e51c9642
2025-07-12 20:48:14.456191 | orchestrator | 2025-07-12 20:48:14 - f9552312-ef03-44f8-93ef-963add1124cf
2025-07-12 20:48:14.930423 | orchestrator | 2025-07-12 20:48:14 - clean up volumes
2025-07-12 20:48:15.044211 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-2-node-base
2025-07-12 20:48:15.083833 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-1-node-base
2025-07-12 20:48:15.131094 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-4-node-base
2025-07-12 20:48:15.175689 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-manager-base
2025-07-12 20:48:15.219022 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-3-node-base
2025-07-12 20:48:15.264418 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-5-node-base
2025-07-12 20:48:15.311899 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-0-node-base
2025-07-12 20:48:15.350915 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-8-node-5
2025-07-12 20:48:15.391631 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-1-node-4
2025-07-12 20:48:15.436369 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-6-node-3
2025-07-12 20:48:15.482633 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-7-node-4
2025-07-12 20:48:15.529453 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-0-node-3
2025-07-12 20:48:15.573204 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-3-node-3
2025-07-12 20:48:15.613134 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-5-node-5
2025-07-12 20:48:15.653265 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-2-node-5
2025-07-12 20:48:15.695479 | orchestrator | 2025-07-12 20:48:15 - testbed-volume-4-node-4
2025-07-12 20:48:15.735105 | orchestrator | 2025-07-12 20:48:15 - disconnect routers
2025-07-12 20:48:16.302245 | orchestrator | 2025-07-12 20:48:16 - testbed
2025-07-12 20:48:17.178199 | orchestrator | 2025-07-12 20:48:17 - clean up subnets
2025-07-12 20:48:17.218941 | orchestrator | 2025-07-12 20:48:17 - subnet-testbed-management
2025-07-12 20:48:17.386295 | orchestrator | 2025-07-12 20:48:17 - clean up networks
2025-07-12 20:48:17.558367 | orchestrator | 2025-07-12 20:48:17 - net-testbed-management
2025-07-12 20:48:17.842156 | orchestrator | 2025-07-12 20:48:17 - clean up security groups
2025-07-12 20:48:17.879373 | orchestrator | 2025-07-12 20:48:17 - testbed-node
2025-07-12 20:48:17.992857 | orchestrator | 2025-07-12 20:48:17 - testbed-management
2025-07-12 20:48:18.114237 | orchestrator | 2025-07-12 20:48:18 - clean up floating ips
2025-07-12 20:48:18.153996 | orchestrator | 2025-07-12 20:48:18 - 81.163.193.169
2025-07-12 20:48:18.512714 | orchestrator | 2025-07-12 20:48:18 - clean up routers
2025-07-12 20:48:18.620175 | orchestrator | 2025-07-12 20:48:18 - testbed
2025-07-12 20:48:19.680243 | orchestrator | ok: Runtime: 0:00:18.053845
2025-07-12 20:48:19.684827 |
2025-07-12 20:48:19.685017 | PLAY RECAP
2025-07-12 20:48:19.685161 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-12 20:48:19.685233 |
2025-07-12 20:48:19.833273 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 20:48:19.835537 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-12 20:48:20.606405 |
2025-07-12 20:48:20.606630 | PLAY [Cleanup play]
2025-07-12 20:48:20.626993 |
2025-07-12 20:48:20.627167 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 20:48:20.680671 | orchestrator | ok
2025-07-12 20:48:20.688337 |
2025-07-12 20:48:20.688498 | TASK [Set cloud fact (local deployment)]
2025-07-12 20:48:20.723622 | orchestrator | skipping: Conditional result was False
2025-07-12 20:48:20.739697 |
2025-07-12 20:48:20.739906 | TASK [Clean the cloud environment]
2025-07-12 20:48:21.888251 | orchestrator | 2025-07-12 20:48:21 - clean up servers
2025-07-12 20:48:22.375007 | orchestrator | 2025-07-12 20:48:22 - clean up keypairs
2025-07-12 20:48:22.391791 | orchestrator | 2025-07-12 20:48:22 - wait for servers to be gone
2025-07-12 20:48:22.437417 | orchestrator | 2025-07-12 20:48:22 - clean up ports
2025-07-12 20:48:22.515722 | orchestrator | 2025-07-12 20:48:22 - clean up volumes
2025-07-12 20:48:22.574739 | orchestrator | 2025-07-12 20:48:22 - disconnect routers
2025-07-12 20:48:22.606298 | orchestrator | 2025-07-12 20:48:22 - clean up subnets
2025-07-12 20:48:22.624143 | orchestrator | 2025-07-12 20:48:22 - clean up networks
2025-07-12 20:48:22.797540 | orchestrator | 2025-07-12 20:48:22 - clean up security groups
2025-07-12 20:48:22.834670 | orchestrator | 2025-07-12 20:48:22 - clean up floating ips
2025-07-12 20:48:22.862919 | orchestrator | 2025-07-12 20:48:22 - clean up routers
2025-07-12 20:48:23.284943 | orchestrator | ok: Runtime: 0:00:01.379401
2025-07-12 20:48:23.288888 |
2025-07-12 20:48:23.289045 | PLAY RECAP
2025-07-12 20:48:23.289170 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-12 20:48:23.289234 |
2025-07-12 20:48:23.410464 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-12 20:48:23.411488 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-12 20:48:24.179430 |
2025-07-12 20:48:24.179666 | PLAY [Base post-fetch]
2025-07-12 20:48:24.195736 |
2025-07-12 20:48:24.195881 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-12 20:48:24.252441 | orchestrator | skipping: Conditional result was False
2025-07-12 20:48:24.266420 |
2025-07-12 20:48:24.266683 | TASK [fetch-output : Set log path for single node]
2025-07-12 20:48:24.325391 | orchestrator | ok
2025-07-12 20:48:24.334576 |
2025-07-12 20:48:24.334722 | LOOP [fetch-output : Ensure local output dirs]
2025-07-12 20:48:24.811367 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/b8321735e9ba42e18f9d24de95f698e9/work/logs"
2025-07-12 20:48:25.053696 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b8321735e9ba42e18f9d24de95f698e9/work/artifacts"
2025-07-12 20:48:25.298640 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/b8321735e9ba42e18f9d24de95f698e9/work/docs"
2025-07-12 20:48:25.323464 |
2025-07-12 20:48:25.323686 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-12 20:48:26.213697 | orchestrator | changed: .d..t...... ./
2025-07-12 20:48:26.214060 | orchestrator | changed: All items complete
2025-07-12 20:48:26.214172 |
2025-07-12 20:48:26.907796 | orchestrator | changed: .d..t...... ./
2025-07-12 20:48:27.599045 | orchestrator | changed: .d..t...... ./
2025-07-12 20:48:27.623347 |
2025-07-12 20:48:27.623456 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-12 20:48:27.650896 | orchestrator | skipping: Conditional result was False
2025-07-12 20:48:27.653207 | orchestrator | skipping: Conditional result was False
2025-07-12 20:48:27.663879 |
2025-07-12 20:48:27.663947 | PLAY RECAP
2025-07-12 20:48:27.663999 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-07-12 20:48:27.664025 |
2025-07-12 20:48:27.748167 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-12 20:48:27.750519 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-12 20:48:28.425276 |
2025-07-12 20:48:28.425395 | PLAY [Base post]
2025-07-12 20:48:28.438026 |
2025-07-12 20:48:28.438132 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-12 20:48:29.365619 | orchestrator | changed
2025-07-12 20:48:29.374665 |
2025-07-12 20:48:29.374768 | PLAY RECAP
2025-07-12 20:48:29.374854 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-12 20:48:29.374926 |
2025-07-12 20:48:29.458226 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-12 20:48:29.459106 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-12 20:48:30.225570 |
2025-07-12 20:48:30.225759 | PLAY [Base post-logs]
2025-07-12 20:48:30.237063 |
2025-07-12 20:48:30.237282 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-12 20:48:30.700608 | localhost | changed
2025-07-12 20:48:30.713307 |
2025-07-12 20:48:30.713468 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-12 20:48:30.751985 | localhost | ok
2025-07-12 20:48:30.759227 |
2025-07-12 20:48:30.759437 | TASK [Set zuul-log-path fact]
2025-07-12 20:48:30.777930 | localhost | ok
2025-07-12 20:48:30.789448 |
2025-07-12 20:48:30.789600 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-12 20:48:30.826249 | localhost | ok
2025-07-12 20:48:30.830894 |
2025-07-12 20:48:30.831035 | TASK [upload-logs : Create log directories]
2025-07-12 20:48:31.346429 | localhost | changed
2025-07-12 20:48:31.349440 |
2025-07-12 20:48:31.349544 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-12 20:48:31.872348 | localhost -> localhost | ok: Runtime: 0:00:00.007219
2025-07-12 20:48:31.879452 |
2025-07-12 20:48:31.879677 | TASK [upload-logs : Upload logs to log server]
2025-07-12 20:48:32.452092 | localhost | Output suppressed because no_log was given
2025-07-12 20:48:32.456679 |
2025-07-12 20:48:32.456870 | LOOP [upload-logs : Compress console log and json output]
2025-07-12 20:48:32.513772 | localhost | skipping: Conditional result was False
2025-07-12 20:48:32.519123 | localhost | skipping: Conditional result was False
2025-07-12 20:48:32.532087 |
2025-07-12 20:48:32.532334 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-12 20:48:32.589224 | localhost | skipping: Conditional result was False
2025-07-12 20:48:32.589524 |
2025-07-12 20:48:32.596137 | localhost | skipping: Conditional result was False
2025-07-12 20:48:32.605943 |
2025-07-12 20:48:32.606192 | LOOP [upload-logs : Upload console log and json output]